{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:12:34.620426Z"
},
"title": "Detecting False Claims in Low-Resource Regions: A Case Study of Caribbean Islands",
"authors": [
{
"first": "Jason",
"middle": [
"S"
],
"last": "Lucas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Pennsylvania State University",
"location": {
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Limeng",
"middle": [],
"last": "Cui",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Pennsylvania State University",
"location": {
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Thai",
"middle": [],
"last": "Le",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Pennsylvania State University",
"location": {
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Pennsylvania State University",
"location": {
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The COVID-19 pandemic has created severe threats to global health control. In particular, misinformation circulated on social media and news outlets has undermined public trust in government and health agencies. This problem is further exacerbated in developing countries or low-resource regions where the news may not be equipped with abundant English fact-checking information. This poses a question: \"are existing computational solutions toward misinformation also effective in low-resource regions?\" In this paper, to answer this question, we make the first attempt to detect COVID-19 misinformation in English, Spanish, and Haitian French populated in the Caribbean region, using the fact-checked claims in US-English. We started by collecting a dataset of real & false claims in the Caribbean region. Then we trained several classification and language models on COVID-19 from high-resource language regions and transferred this knowledge to the Caribbean claim dataset. The experimental results show the limitations of current false claim detection in low-resource regions and encourage further research toward the detection of multilingual false claims in the long tail.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "The COVID-19 pandemic has created severe threats to global health control. In particular, misinformation circulated on social media and news outlets has undermined public trust in government and health agencies. This problem is further exacerbated in developing countries or low-resource regions where the news may not be equipped with abundant English fact-checking information. This poses a question: \"are existing computational solutions toward misinformation also effective in low-resource regions?\" In this paper, to answer this question, we make the first attempt to detect COVID-19 misinformation in English, Spanish, and Haitian French populated in the Caribbean region, using the fact-checked claims in US-English. We started by collecting a dataset of real & false claims in the Caribbean region. Then we trained several classification and language models on COVID-19 from high-resource language regions and transferred this knowledge to the Caribbean claim dataset. The experimental results show the limitations of current false claim detection in low-resource regions and encourage further research toward the detection of multilingual false claims in the long tail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this work, we refer to false claims as assertions that are not supported by facts and are made with the objective of misleading or deceiving the public (Molina et al., 2021). Social media platforms enable people to independently publish and share media content without scrutiny filters for credibility and integrity 1 . Therefore, inaccurate, false, malicious, and propagandistic content has become abundant on social media. Furthermore, when false claims travel across regions and often get translated or modified, it becomes increasingly difficult for machine learning (ML) models to detect them. Online surveillance systems (i.e., false claim detectors) are often pre-trained primarily on high-resource languages (e.g., English, Chinese). Despite significant progress in ML models, however, building and maintaining ML models in low-resource languages (e.g., Tagalog, Haitian Creole) is still challenging due to scarce data, limited language lexicons, and translation barriers that are inherent to low-resource language settings.",
"cite_spans": [
{
"start": 154,
"end": 175,
"text": "(Molina et al., 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This poses a natural question: \"how effective are computational ML solutions developed in high-resource regions to detect false claims circulating in low-resource regions?\" In this paper, to answer this question, we propose the first thorough case study on the detection of false claims in the Caribbean Islands.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Fact-checking initiatives are scarce and inadequate in low-resource settings, especially in the Caribbean Islands, due to the region's cultural and linguistic diversity. The Caribbean region is a developing, heterogeneous, interconnected archipelago that is vulnerable to false claim campaigns. It consists of 35 states and territories bordering the Gulf of Mexico and the Caribbean Sea 2 . The Caribbean has six official languages: Spanish, English, French, and Dutch, as well as two indigenous Creoles (Haitian Creole and Papiamento) 3 . Our data curation initiative shows that this region lacks essential technological resources and infrastructure to combat false claim propagation. Few fact-checking organizations exist, and they have limited data covering the Caribbean. Major news outlets such as Loop News make significant efforts to debunk false claims. These initiatives are essential but inadequate to effectively respond to prevailing false claims during crises.",
"cite_spans": [
{
"start": 547,
"end": 548,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we studied two research questions: RQ1: How do ML models trained on high-resource languages perform on current Caribbean false claims?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "RQ2: Are more sophisticated ML techniques (e.g., Transfer Learning), useful to detect false claims in the Caribbean?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Note that the focus of our investigation is on COVID-19-related false claims in the Caribbean Islands. ML models trained on high-resource languages are not easily transferable to low-resource languages. One of the main challenges comes from data scarcity (i.e., a lack of labeled training data in low-resource languages). This issue is further exacerbated in false claim detection, which suffers from class imbalance (i.e., the number of labeled false claims is significantly smaller than that of labeled true claims). Therefore, to thoroughly study false claims in the Caribbean Islands, more sophisticated ML techniques that address indigenous nuances need to be tested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the onset of the COVID-19 pandemic, misinformation in different languages has been circulating on social media. COVID-19 misinformation datasets can be roughly divided into two categories: monolingual and multilingual. CoAID, ReCOVery (Zhou et al., 2020), CMU-MisCOV19 (Memon and Carley, 2020), CHECKED (Yang et al., 2021), and the CONSTRAINT task dataset (Patwa et al., 2020) are monolingual datasets in high-resource languages (English or Chinese). CoAID is a diverse COVID-19 misinformation dataset, including 5,216 news items about COVID-19 with ground-truth labels. Multilingual datasets contain news pieces in multiple languages. MM-COVID (Li et al., 2020) contains false & real news content in 6 different languages. FakeCovid (Shahi and Nandini, 2020) has 5,182 COVID-19 fact-checking news pieces in 40 languages.",
"cite_spans": [
{
"start": 246,
"end": 265,
"text": "(Zhou et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 315,
"end": 334,
"text": "(Yang et al., 2021)",
"ref_id": "BIBREF15"
},
{
"start": 362,
"end": 382,
"text": "(Patwa et al., 2020)",
"ref_id": null
},
{
"start": 645,
"end": 662,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "With the urgent need to combat the infodemic in developing countries and immigrant communities speaking low-resource languages, researchers have been studying how to transfer models pre-trained on high-resource domains to low-resource domains. Du et al. (2021) proposed a cross-lingual false claims detector called \"CrossFake\", which is trained on a high-resource language (English) COVID-19 news corpus and used to predict news credibility in a low-resource language (Chinese). Bang et al. (2021) proposed two model generalization methods on COVID-19 fake news for more robust fake news detection across different COVID-19 misinformation datasets. In this paper, we chose false claim detection in the Caribbean region as a showcase. It is a challenging problem due to the multiculturalism and multilingualism of Caribbean people. We studied how to leverage pre-trained models from a high-resource region (CoAID) to detect misinformation in a low-resource region (Caribbean false claim data).",
"cite_spans": [
{
"start": 240,
"end": 256,
"text": "Du et al. (2021)",
"ref_id": "BIBREF5"
},
{
"start": 482,
"end": 500,
"text": "Bang et al. (2021)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Main Proposal: Datasets and Research Questions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This investigation utilized CoAID, a high-resource-language COVID-19 false claims dataset written in English and curated from the United States. The CoAID corpus comprises 260,037 claims and news articles. This study assessed the CoAID pre-trained baseline models' ability to accurately detect false claims in the Caribbean dataset, given indigenous data challenges such as scarcity and language barriers. Fact-checking institutions are trustworthy sources for determining the veracity of claims (Shu et al., 2019). They use rigorous methods to investigate the veracity and correctness of assertions, including references and URLs where false claims originate (Shu et al., 2019). Unfortunately, the Caribbean territory lacks these critical technological resources, notably fact-checking institutions with adequate regional data to combat the spread and growth of false claims. Instead, the majority of fact-checking is performed by respected Caribbean news outlets such as Loop News that do not consistently adhere to stringent fact-checking procedures. As a result, Caribbean fact-checked false claims are primarily assertions rarely linked to original content or the origin of such claims. This is the reason why we study Caribbean false claims detection in this work (Molina et al., 2021).",
"cite_spans": [
{
"start": 490,
"end": 508,
"text": "(Shu et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 655,
"end": 673,
"text": "(Shu et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 1272,
"end": 1293,
"text": "(Molina et al., 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Caribbean Claims Dataset",
"sec_num": "3.1"
},
{
"text": "Given the aforementioned status quo, we manually crawled the accessible fact-checking and news organization websites. We then extracted only the original assertions, or alternatively the annotated claims when the original assertions were inaccessible. See Table 1 for all web sources that were crawled. We further inspected the Caribbean web sources and solicited data from the 9 institutions' websites detailed in Table 1. The final dataset totals 273 articles, published mostly between 2019 and 2022. All collected data are COVID-19 claims except for two Dominican Republic vaccine-related health claims published in 2010. The corpus consists of 121 annotated news claims and 152 original news claims. The dataset covers 3 of the 6 official languages spoken in the Caribbean: English, Spanish, and French (Table 2). The labels comprise 54% real claims and 46% false claims (Table 4). See Table 4 for the character length distribution of the two labels. The contents of our Caribbean dataset contain language cues that help ML models distinguish between false and real claims.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 264,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 409,
"end": 416,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 792,
"end": 800,
"text": "Table 2)",
"ref_id": "TABREF1"
},
{
"start": 868,
"end": 876,
"text": "(Table 4",
"ref_id": "TABREF5"
},
{
"start": 884,
"end": 891,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Caribbean Claims Dataset",
"sec_num": "3.1"
},
{
"text": "To establish a baseline, we used pre-trained models trained on a large amount of moderated English COVID-19 data. Since CoAID contains a large number of English news claims from the United States, the baseline models were trained on CoAID. We divided the RQ1 experiment into three subtasks to aid empirical explainability. Each task uses a different test set to answer RQ1. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1: Baseline Model Performance on Caribbean False Claims",
"sec_num": "3.2"
},
{
"text": "This experiment adopted a self-supervised BERT-based transformer model, pre-trained on a large corpus of monolingual data. We encode the news using BERT. We adopt the binary cross-entropy loss function during training. We fine-tuned the BERT model using the CoAID dataset and used it to conduct the RQ2 experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
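The binary cross-entropy objective mentioned above can be sketched in plain Python (a minimal illustration of the loss only, not the authors' training code; the function name and toy values are hypothetical):

```python
import math

def binary_cross_entropy(probs, labels):
    """Mean binary cross-entropy over predicted probabilities and 0/1 labels."""
    eps = 1e-12  # guard against log(0)
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(probs, labels)) / len(labels)

# A confident, correct classifier incurs a lower loss than random guessing.
confident = binary_cross_entropy([0.9, 0.1], [1, 0])
random_guess = binary_cross_entropy([0.5, 0.5], [1, 0])
```

Minimizing this loss pushes the model's predicted probability for the false-claim class toward the 0/1 ground-truth label.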
{
"text": "Our hypothesis is that the answer to RQ1 will not be sufficient to solve the task of detecting false claims accurately in Caribbean languages. Therefore, we propose a more sophisticated method to improve the model's performance. Specifically, we studied the performance of transfer learning using a pre-trained BERT model. We break the RQ2 experiment into two tasks to answer this question and maintain empirical consistency with the RQ1 experiments. 1. CoAID Test Set: this is only used for RQ1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "2. Original Caribbean English Set: this is used for RQ1: Task II and RQ2: Task IV (Table 3).",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 92,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "3. Translated-English Caribbean Set: this is used for RQ1: Task III and RQ2: Task V (Table 3).",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 94,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "Given the unique challenges with Caribbean false claims data, this research selected five baseline models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "\u2022 Long short-term memory (LSTM)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "\u2022 Bidirectional Gated Recurrent Unit (BiGRU) (Bahdanau et al., 2015) \u2022 Recurrent Neural Network (RNN)",
"cite_spans": [
{
"start": 45,
"end": 68,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "\u2022 Convolutional Neural Network (CNN) \u2022 Random Forest (RF)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "The framework overview is shown in Figure 1. For the first task in RQ1, we first encode the news using GloVe (Pennington et al., 2014), a pre-trained word-embedding model, and fit the embeddings into the models. The GloVe word embeddings are used for all the baseline models except for Random Forest, which encodes the text with TF-IDF.",
"cite_spans": [
{
"start": 110,
"end": 135,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
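The TF-IDF encoding used for the Random Forest baseline can be sketched as below (a hand-rolled, hypothetical re-implementation over pre-tokenized toy documents; the paper presumably used a standard vectorizer):

```python
import math
from collections import Counter

def tfidf(corpus):
    """Smoothed TF-IDF weights for a list of tokenized documents."""
    n_docs = len(corpus)
    df = Counter()                      # document frequency per term
    for doc in corpus:
        df.update(set(doc))
    # add-one smoothed idf, as in common vectorizer implementations
    idf = {t: math.log((1 + n_docs) / (1 + df[t])) + 1 for t in df}
    weights = []
    for doc in corpus:
        tf = Counter(doc)               # raw term frequency in this document
        weights.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return weights

docs = [["covid", "vaccine", "hoax"], ["covid", "vaccine", "works"]]
w = tfidf(docs)
```

Terms that appear in fewer documents ("hoax") get larger weights than terms shared by every document ("covid"), which is the signal the Random Forest then consumes.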
{
"text": "The baseline models were evaluated using F1, Kappa and Precision-Recall Area Under the Curve (PR AUC) scores from the models' output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "1. Area Under the Precision-Recall Curve (PR-AUC):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "PR-AUC = \u2211_{k=1}^{n} Prec(k) \u2206Rec(k),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "where (Prec(k), Rec(k)) is the k-th precision-recall operating point and \u2206Rec(k) denotes the change in recall between consecutive operating points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "2. F1 Score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "F1 Score = 2 \u2022 (Prec \u2022 Rec)/(Prec + Rec),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "where Prec is precision and Rec is recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "3. Cohen's Kappa:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "\u03ba = (p_o \u2212 p_e) / (1 \u2212 p_e),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "where p_o is the observed agreement (identical to accuracy), and p_e is the expected agreement, i.e., the probability of agreement by chance computed from the marginal frequencies of each category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
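The three evaluation metrics can be computed directly from their definitions in the text. The following is a minimal sketch with hypothetical helper names, not the evaluation code used in the paper:

```python
def pr_auc(points):
    """Summation form of PR-AUC over (precision, recall) operating
    points sorted by increasing recall."""
    auc, prev_rec = 0.0, 0.0
    for prec, rec in points:
        auc += prec * (rec - prev_rec)  # Prec(k) times the change in recall
        prev_rec = rec
    return auc

def f1(prec, rec):
    """Harmonic mean of precision and recall."""
    return 2 * prec * rec / (prec + rec)

def cohens_kappa(y_true, y_pred):
    """Observed agreement corrected for chance agreement, estimated
    from the marginal label frequencies."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed
    labels = set(y_true) | set(y_pred)
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0 means the classifier agrees with the labels no better than chance, which is why a negative kappa (as reported for BERT later in the paper) signals worse-than-chance agreement even when F1 looks moderate.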
{
"text": "One of our primary interests is the precision-recall of the positive class, i.e., correctly identifying false claims, in our assessment of the models' performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
{
"text": "We implement all models with Keras. We split the data into train and test sets with a 75:25 ratio. For all models, we use RMSProp (Hinton et al., 2012) with a mini-batch size of 50 and train for 30 epochs. In order to have a fair comparison, we set the hidden dimension to 100 for all models. For the pre-trained BERT model, we use a BERT base model 4 (uncased) pre-trained on a large corpus of English data. All methods are trained on Ubuntu 20.04 with an Nvidia Tesla K80 GPU.",
"cite_spans": [
{
"start": 126,
"end": 147,
"text": "(Hinton et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2: Applying Transfer Learning",
"sec_num": "3.3"
},
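The 75:25 train/test split can be sketched as a seeded shuffle-and-cut (the seed value and shuffling strategy are assumptions; the paper does not specify them):

```python
import random

def train_test_split(data, test_ratio=0.25, seed=42):
    """Shuffle indices with a fixed seed, then cut at the 75:25 boundary."""
    rng = random.Random(seed)           # fixed seed for reproducibility
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(data) * (1 - test_ratio))
    return [data[i] for i in idx[:cut]], [data[i] for i in idx[cut:]]

train, test = train_test_split(list(range(100)))
```

Fixing the seed keeps every baseline model evaluated on the same held-out claims, which matters for the cross-model comparisons reported later.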
{
"text": "First, to establish the research baseline performance, we pre-trained machine learning models on CoAID claims in English and tested them on English Caribbean false claims. Task IV encompasses running English Caribbean news claims through the refined BERT model and assessing its performance. The results from this experiment show that transfer learning with BERT outperformed the Task II (RQ1) models, which used the same dataset detailed in Table 3. The BERT model's F1 score is 0.55, whereas the top Task II (RQ1) F1 score is 0.54. Also, BERT's PR AUC score is 0.59, whereas the top Task II (RQ1) PR AUC is 0.56. However, BERT's Kappa score of -0.16 was less than the Task II (RQ1) score of 0.02. Overall, the transfer learning technique using BERT achieved better predictive performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Finally, in Task V, we assessed the pre-trained, fine-tuned BERT model's ability to accurately predict Caribbean false claims translated from French/Spanish to English. The results from this experiment indicate that BERT transfer learning outperforms the Task III (RQ1) models, which used the same dataset detailed in Table 3. The BERT model's F1 score is 0.55, whereas the top Task III (RQ1) F1 score is 0.52. Also, BERT's PR AUC score is 0.57, whereas the top Task III (RQ1) PR AUC is 0.55. However, BERT's Kappa score of -0.17 was less than the Task III (RQ1) score of 0.02. CoAID baseline models are resilient at classifying claims despite an imbalanced dataset with a majority of real claims. The CNN's PR AUC score was approximately 0.76 in predicting the minority false claims regardless of the imbalanced binary classification in the dataset. This suggests that CoAID high-resource language models perform fairly well at predicting news claims curated from the US high-resource language setting. RQ1: Task II assessed CoAID models' ability to accurately detect Caribbean news claims originally written in English. When classifying Caribbean news claims in English, we observed an overall performance decline in all models. Thus, this outcome suggests that pre-trained high-resource detection models perform poorly on low-resource language context data written in English. RQ1: Task III assessed CoAID models' ability to accurately detect Caribbean news claims translated to English. When claims were translated to English, pre-trained high-resource detection models under-performed on low-resource language context data. These results suggest a language translation loss. We propose the term language translation loss to encapsulate the phenomena that occur when a model's predictive power decreases due to translation nuances. Examples are politically loaded COVID-19 false claim propaganda and slang hidden in datasets that weaken signals, impacting ML models' predictive power. RQ1 Summary. RQ1 results show a steady decline in all models' performance when introduced to Caribbean news claims that are originally written in or translated to English (see Fig 3 & 2). These findings are clear indicators that high-resource language ML models are substandard at detecting low-resource language false claims such as the Caribbean region news claims data. These findings validated the research hypothesis: high-resource language models are not appropriate for detecting COVID-19 false claims in diverse, low-resource regions.",
"cite_spans": [],
"ref_spans": [
{
"start": 2131,
"end": 2141,
"text": "Fig 3 & 2)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "The above results prompted the need for more robust and novel techniques to best address the nuances and false claim phenomena specific to the Caribbean. Thus, we experimented with a transfer learning methodology to gain insight into Caribbean false claim detection challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2 Experiments",
"sec_num": "5.2"
},
{
"text": "RQ2: Tasks IV & V assessed the transfer learning technique on Caribbean false claim detection. Task IV results indicate that the transfer learning technique using BERT achieved better predictive performance than the English pre-trained high-resource language models. Similarly, Task V data demonstrate that the transfer learning technique achieves better model performance. Given indigenous Caribbean data challenges, these findings indicate that advanced ML techniques have better learning mechanisms for detection in low-resource language settings (see Fig 4 & 5).",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 559,
"text": "Fig 4 & 5)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "RQ2 Experiments",
"sec_num": "5.2"
},
{
"text": "RQ2 Summary: the results give a clear indication that sophisticated, refined ML approaches achieve better performance. Transfer learning is shown to improve performance by addressing Caribbean data scarcity issues. The linguistic similarity between CoAID and the Caribbean false claims boosted the model's performance through transfer learning. News outlet websites, Factcheckcaribbean.com, and Poynter.com are the most reputable organizations that curate Caribbean false claims data. These institutions have limited data covering only a few islands. Loop News has the largest coverage and quantity of fact-checked news claims compared to other sources. Although news outlets have more data, fact-checking institutions have better-quality data. News outlet organizations do their best to verify and debunk false claims. In the Caribbean region, there is a need for more rigorous false claim fact-checking processes (Seo et al., 2022). These initiatives can be established by non-governmental organizations (NGOs) such as the Pan American Health Organization (PAHO) and the Caribbean Public Health Agency.",
"cite_spans": [
{
"start": 808,
"end": 826,
"text": "(Seo et al., 2022)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2 Experiments",
"sec_num": "5.2"
},
{
"text": "This research did not address data imbalances in the Caribbean data, which can be addressed by future work using state-of-the-art techniques. Future studies can focus on developing or utilizing promising AI techniques such as meta-transfer learning, data augmentation techniques, and multilingual BERT transformer models to address false claim propagation in the Caribbean low-resource setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2 Experiments",
"sec_num": "5.2"
},
{
"text": "Context is imperative when considering computational solutions to address the false claims phenomenon in low-resource language settings. In the Caribbean regional context, numerous barriers complicate false claim detection when using high-resource language ML models. These barriers include: language, data scarcity, and rare full-coverage fact-checking institutions. Such barriers are under-researched and thus poorly understood. This suggests the need for more exploratory studies to gain an in-depth understanding of the false claims phenomenon in the Caribbean region.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ2 Experiments",
"sec_num": "5.2"
},
{
"text": "High-resource detection models have low accuracy in classifying Caribbean false claims data. Region-specific data challenges have been shown to reduce the performance of high-resource ML models. This encourages the use of sophisticated ML techniques and AI methodologies to capture signals that current models are unable to recognize.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our experiments with transfer learning have shown improvements in ML models' performance. The findings in this research support our hypothesis: high-resource language models perform poorly on low-resource language data. Future studies need to focus efforts on improving false claim detection in the Caribbean. A major challenge is that every island has its unique Creole, which complicates ML models trained in formal settings. Since the Jamaican language is a combination of several languages, even the best language translators are ineffective at accurately translating it to English. This poses another difficulty for the problem of false claim detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "False claims are among the greatest threats to public health in the Caribbean and globally. As we saw with COVID-19, if we do not address false claims, epidemic/pandemic diseases will spread exponentially (Brainard and Hunter, 2020).",
"cite_spans": [
{
"start": 197,
"end": 224,
"text": "(Brainard and Hunter, 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://www.who.int/news-room/featurestories/detail/immunizing-the-public-against-misinformation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://studyincaribbean.com/about-caribbean.html 3 https://www.caribbeanandco.com/caribbean-languages/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/bert-base-uncased",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Research Implication. News outlet websites, Factcheckcaribbean.com and Poynter.com are the most reputable organizations to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by NSF awards #1820609, 915801, and #2114824.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Model generalization on covid-19 fake news detection",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Bang",
"suffix": ""
},
{
"first": "Etsuko",
"middle": [],
"last": "Ishii",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Cahyawijaya",
"suffix": ""
},
{
"first": "Ziwei",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2021,
"venue": "International Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation",
"volume": "",
"issue": "",
"pages": "128--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Bang, Etsuko Ishii, Samuel Cahyawijaya, Ziwei Ji, and Pascale Fung. 2021. Model generalization on covid-19 fake news detection. In International Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation, pages 128-140. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Misinformation making a disease outbreak worse: outcomes compared for influenza, monkeypox, and norovirus",
"authors": [
{
"first": "Julii",
"middle": [],
"last": "Brainard",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"R"
],
"last": "Hunter",
"suffix": ""
}
],
"year": 2020,
"venue": "Simulation",
"volume": "96",
"issue": "4",
"pages": "365--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julii Brainard and Paul R Hunter. 2020. Misinforma- tion making a disease outbreak worse: outcomes compared for influenza, monkeypox, and norovirus. Simulation, 96(4):365-374.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coaid: Covid-19 healthcare misinformation dataset",
"authors": [
{
"first": "Limeng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.00885"
]
},
"num": null,
"urls": [],
"raw_text": "Limeng Cui and Dongwon Lee. 2020. Coaid: Covid-19 healthcare misinformation dataset. arXiv preprint arXiv:2006.00885.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deterrent: Knowledge guided graph attention network for detecting healthcare misinformation",
"authors": [
{
"first": "Limeng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Haeseung",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Maryam",
"middle": [],
"last": "Tabar",
"suffix": ""
},
{
"first": "Fenglong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining",
"volume": "",
"issue": "",
"pages": "492--502",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limeng Cui, Haeseung Seo, Maryam Tabar, Fenglong Ma, Suhang Wang, and Dongwon Lee. 2020. Deter- rent: Knowledge guided graph attention network for detecting healthcare misinformation. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pages 492- 502.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cross-lingual covid-19 fake news detection",
"authors": [
{
"first": "Jiangshu",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Yingtong",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Congying",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Limeng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
}
],
"year": 2021,
"venue": "2021 International Conference on Data Mining Workshops (ICDMW)",
"volume": "",
"issue": "",
"pages": "859--862",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiangshu Du, Yingtong Dou, Congying Xia, Limeng Cui, Jing Ma, and S Yu Philip. 2021. Cross-lingual covid-19 fake news detection. In 2021 International Conference on Data Mining Workshops (ICDMW), pages 859-862. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural networks for machine learning lecture 6a overview of mini-batch gradient descent",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Swersky",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. 2012. Neural networks for machine learning lecture 6a overview of mini-batch gradient descent. 14:8.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Mm-covid: A multilingual and multimodal data repository for combating covid-19 disinformation",
"authors": [
{
"first": "Yichuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bohan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2011.04088"
]
},
"num": null,
"urls": [],
"raw_text": "Yichuan Li, Bohan Jiang, Kai Shu, and Huan Liu. 2020. Mm-covid: A multilingual and multimodal data repository for combating covid-19 disinforma- tion. arXiv preprint arXiv:2011.04088.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Characterizing covid-19 misinformation communities using a novel twitter dataset",
"authors": [
{
"first": "Shahan",
"middle": [
"Ali"
],
"last": "Memon",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"M"
],
"last": "Carley",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.00791"
]
},
"num": null,
"urls": [],
"raw_text": "Shahan Ali Memon and Kathleen M Carley. 2020. Characterizing covid-19 misinformation communi- ties using a novel twitter dataset. arXiv preprint arXiv:2008.00791.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "fake news\" is not simply false information: A concept explication and taxonomy of online content",
"authors": [
{
"first": "Maria",
"middle": [
"D"
],
"last": "Molina",
"suffix": ""
},
{
"first": "S",
"middle": [
"Shyam"
],
"last": "Sundar",
"suffix": ""
},
{
"first": "Thai",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2021,
"venue": "American behavioral scientist",
"volume": "65",
"issue": "2",
"pages": "180--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria D Molina, S Shyam Sundar, Thai Le, and Dong- won Lee. 2021. \"fake news\" is not simply false information: A concept explication and taxonomy of online content. American behavioral scientist, 65(2):180-212.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fighting an infodemic: Covid-19 fake news dataset",
"authors": [
{
"first": "Parth",
"middle": [],
"last": "Patwa",
"suffix": ""
},
{
"first": "Shivam",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "PYKL",
"suffix": ""
},
{
"first": "Vineeth",
"middle": [],
"last": "Guptha",
"suffix": ""
},
{
"first": "Gitanjali",
"middle": [],
"last": "Kumari",
"suffix": ""
},
{
"first": "Md",
"middle": [
"Shad"
],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parth Patwa, Shivam Sharma, Srinivas PYKL, Vineeth Guptha, Gitanjali Kumari, Md Shad Akhtar, Asif Ekbal, Amitava Das, and Tanmoy Chakraborty. 2020. Fighting an infodemic: Covid-19 fake news dataset.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "If you see a reliable source, say something: Effects of correction comments on covid-19 misinformation",
"authors": [
{
"first": "Haeseung",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aiping",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Sian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haeseung Seo, Aiping Xiong, Sian Lee, and Dongwon Lee. 2022. If you see a reliable source, say some- thing: Effects of correction comments on covid-19 misinformation. In Proceedings of the AAAI Confer- ence on Web and Social Media.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Fakecovid-a multilingual cross-domain fact check news dataset for covid-19",
"authors": [
{
"first": "Gautam",
"middle": [
"Kishore"
],
"last": "Shahi",
"suffix": ""
},
{
"first": "Durgesh",
"middle": [],
"last": "Nandini",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.11343"
]
},
"num": null,
"urls": [],
"raw_text": "Gautam Kishore Shahi and Durgesh Nandini. 2020. Fakecovid-a multilingual cross-domain fact check news dataset for covid-19. arXiv preprint arXiv:2006.11343.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "defend: Explainable fake news detection",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Limeng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining",
"volume": "",
"issue": "",
"pages": "395--405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019. defend: Explainable fake news detection. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 395-405.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Checked: Chinese covid-19 fake news dataset. Social Network Analysis and Mining",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xinyi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "11",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Yang, Xinyi Zhou, and Reza Zafarani. 2021. Checked: Chinese covid-19 fake news dataset. So- cial Network Analysis and Mining, 11(1):1-8.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recovery: A multimodal repository for covid-19 news credibility research",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Mulay",
"suffix": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Ferrara",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 29th ACM international conference on information & knowledge management",
"volume": "",
"issue": "",
"pages": "3205--3212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyi Zhou, Apurva Mulay, Emilio Ferrara, and Reza Zafarani. 2020. Recovery: A multimodal repository for covid-19 news credibility research. In Proceed- ings of the 29th ACM international conference on information & knowledge management, pages 3205- 3212.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Overview of RQ1 ML models' performance from Tasks I to III. The box plot shows a decline in the ML models' performance on the Caribbean data language settings."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Overview of RQ1 model evaluation metrics from Tasks I to III. This box plot shows a decline in performance on F1, Kappa, and PR AUC."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Overview of RQ1 ML models' performance metric scores compared to RQ2 scores. This bar chart compares the performance of the CoAID RQ1: Task II models with the RQ2: Task IV fine-tuned BERT transformer model. The graph shows that transfer learning achieves better performance."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Overview of RQ1 performance compared to RQ2. This bar chart compares the performance of the CoAID RQ1: Task III models with the RQ2: Task V fine-tuned BERT transformer model. The graph shows that transfer learning via BERT achieves better performance."
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"text": "Web sources and news claim articles curated from each source",
"content": "<table><tr><td>Institution</td><td>Source Name</td><td># Articles</td></tr><tr><td colspan=\"3\">News Outlet Loop News Outlet Diario Libre News Outlet Aljazeera News Outlet St. Lucas Times News Outlet GBN News Outlet St. Vincent Times 3 188 35 25 7 3 News Outlet Barbados Today 2 News Outlet Mikey LiVE 1</td></tr><tr><td colspan=\"2\">Fact-checker Poynter</td><td>9</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"text": "The language composition of the curated Caribbean dataset.",
"content": "<table><tr><td>Language</td><td>Qty.</td><td>%</td></tr><tr><td>English</td><td>171</td><td>63%</td></tr><tr><td>Spanish</td><td>66</td><td>24%</td></tr><tr><td>French</td><td>36</td><td>7%</td></tr></table>"
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"text": "Task I Obtain the baseline performance using the CoAID dataset. Test set is the CoAID dataset. Task II Assess the CoAID models' ability to predict Caribbean English false claims. Test set is the Caribbean English claims. Task III Assess the baseline models with additional English Caribbean claims translated from Spanish and French. Test set is the Caribbean claims dataset translated to English.",
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"text": "Task IV Assess the fine-tuned BERT model's ability to predict Caribbean English false claims. Test set is the Caribbean English claims.",
"content": "<table><tr><td>Task V Assess the fine-tuned BERT model with additional English Caribbean claims translated from Spanish and French. Test set is the Caribbean claims dataset translated to English.</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"text": "Caribbean dataset composition of false and real news by RQ tasks, respectively",
"content": "<table><tr><td>RQS Tasks</td><td>Claims</td><td>False</td><td>Real</td><td>Total</td></tr><tr><td>RQ1: T2 &amp; RQ2: T4</td><td>Original-En</td><td>95</td><td>76</td><td>171</td></tr><tr><td>RQ1: T3 &amp; RQ2: T5</td><td>Translated-En</td><td>52</td><td>50</td><td>102</td></tr><tr><td colspan=\"5\">4 Empirical Evaluation</td></tr><tr><td colspan=\"5\">4.1 Set-Up</td></tr><tr><td colspan=\"5\">This research has three main test sets.</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"html": null,
"num": null,
"text": "Dataset statistics",
"content": "<table><tr><td>Corpus</td><td>Size</td><td>Min char</td><td>Mean char</td><td>Max char</td></tr><tr><td>Real claims</td><td>126</td><td>67</td><td>1187</td><td>3141</td></tr><tr><td>False claims</td><td>147</td><td>26</td><td>183</td><td>969</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"html": null,
"num": null,
"text": "F1: 0.33 to 0.54, Kappa: -0.64 to 0.02, and PR AUC: 0.51 to 0.56. LSTM outperformed all models on F1, while RNN had the highest Kappa and PR AUC scores.",
"content": "<table><tr><td>details the</td></tr></table>"
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"text": "Comparison on Task I for RQ1. The false claims classification performance with standard deviation across five runs. The final prediction denotes the average of each evaluation metric's score from all runs. The results in this table show that LSTM has the best F1 & Kappa scores, while CNN has the highest PR AUC score.",
"content": "<table><tr><td>Model</td><td>F1</td><td>Kappa</td><td>PR AUC</td></tr><tr><td>LSTM</td><td>0.5991 ± 0.060</td><td>0.5721 ± 0.062</td><td>0.6923 ± 0.032</td></tr><tr><td>BiGRU</td><td>0.5708 ± 0.062</td><td>0.5457 ± 0.062</td><td>0.6792 ± 0.026</td></tr><tr><td>RNN</td><td>0.4147 ± 0.188</td><td>0.3950 ± 0.186</td><td>0.6651 ± 0.074</td></tr><tr><td>CNN</td><td>0.5326 ± 0.181</td><td>0.510 ± 0.178</td><td>0.7565 ± 0.097</td></tr><tr><td>RF</td><td>0.3439 ± 0.121</td><td>0.3261 ± 0.118</td><td>0.6152 ± 0.085</td></tr></table>"
},
"TABREF8": {
"type_str": "table",
"html": null,
"num": null,
"text": "Comparison on RQ1 Task II.",
"content": "<table><tr><td colspan=\"4\">The false claims classification performance with standard deviation across five runs. The final prediction denotes the average of each evaluation metric's score from all runs. This experiment shows an overall performance decline compared to the Task I baseline models' output in table 5.</td></tr><tr><td>Model</td><td>F1</td><td>Kappa</td><td>PR AUC</td></tr><tr><td>LSTM</td><td>0.5405 ± 0.059</td><td>-0.0704 ± 0.099</td><td>0.5361 ± 0.042</td></tr><tr><td>BiGRU</td><td>0.5020 ± 0.056</td><td>-0.3164 ± 0.139</td><td>0.4632 ± 0.049</td></tr><tr><td>RNN</td><td>0.2013 ± 0.120</td><td>0.0213 ± 0.027</td><td>0.5603 ± 0.040</td></tr><tr><td>CNN</td><td>0.3574 ± 0.134</td><td>-0.1864 ± 0.200</td><td>0.5151 ± 0.045</td></tr><tr><td>RF</td><td>0.3316 ± 0.012</td><td>-0.6427 ± 0.015</td><td>0.5121 ± 0.008</td></tr><tr><td colspan=\"4\">5 Discussion</td></tr><tr><td colspan=\"4\">5.1 RQ1 Experiments</td></tr><tr><td colspan=\"4\">RQ1: Task I. We established our baseline performance. It is clear from Task I results that</td></tr></table>"
},
"TABREF9": {
"type_str": "table",
"html": null,
"num": null,
"text": "Comparison on Task III for RQ1. The false claims classification performance with standard deviation across five runs. The final prediction denotes the average of each evaluation metric's score from all runs. This experiment shows an overall performance decline compared to the Task I baseline models' output in table 6.",
"content": "<table><tr><td>Model</td><td>F1</td><td>Kappa</td><td>PR AUC</td></tr><tr><td>LSTM</td><td>0.4649 ± 0.168</td><td>-0.0735 ± 0.100</td><td>0.4990 ± 0.089</td></tr><tr><td>BiGRU</td><td>0.5268 ± 0.049</td><td>-0.1809 ± 0.166</td><td>0.4954 ± 0.018</td></tr><tr><td>RNN</td><td>0.2963 ± 0.175</td><td>0.0226 ± 0.114</td><td>0.5543 ± 0.037</td></tr><tr><td>CNN</td><td>0.4884 ± 0.097</td><td>-0.0830 ± 0.175</td><td>0.5164 ± 0.091</td></tr><tr><td>RF</td><td>0.3923 ± 0.009</td><td>-0.5196 ± 0.008</td><td>0.5384 ± 0.007</td></tr></table>"
},
"TABREF10": {
"type_str": "table",
"html": null,
"num": null,
"text": "Comparison on Task IV & V for RQ2.",
"content": "<table><tr><td colspan=\"4\">The false claims classification performance with standard deviation across five runs. The final prediction denotes the average of each evaluation metric's score from all runs. A performance increase was observed in these experiments compared to the Task II &amp; III models' output in table 6 and table 7, respectively.</td></tr><tr><td>Task</td><td>F1</td><td>Kappa</td><td>PR AUC</td></tr><tr><td>BERT IV</td><td>0.5476 ± 0.018</td><td>-0.1578 ± 0.306</td><td>0.5852 ± 0.113</td></tr><tr><td>BERT V</td><td>0.5485 ± 0.047</td><td>-0.1656 ± 0.039</td><td>0.5695 ± 0.117</td></tr><tr><td colspan=\"4\">glish.</td></tr><tr><td colspan=\"4\">RQ1: Task III.</td></tr></table>"
}
}
}
}