{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:12:41.955493Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The advent of Web 2.0 induced the evolution of what has traditionally been described as a \"participatory Web\". From pop-culture music to Black Friday becoming a global phenomenon, and movements like BlackLivesMatter turning into a powerful instrument of global resistance, the Internet and social media have played a pivotal role. As much as we relish the connectedness facilitated by social media, the sentient being in all of us cannot remain obscured by the perils of the unabated misuse of the very free speech that these platforms aim to empower. Within the shadows of a transparent yet anonymous social media, lurk those disguising themselves as pseudo-flag-bearers of free speech, and pounce on every opportunity they get to spread vile content, detrimental to society. Such miscreants are desperate to misuse those 280 character sound bites to further their anti-openness agendas in the form of hate speech, disinformation, and ill-intended propaganda. Such menace experiences flare-ups during emergency situations such as the COVID-19 outbreak and geopolitically conflicting global order. There have been numerous efforts toward addressing some of these problems computationally, but with evolving complexities of online harmful content, more robust solutions are needed. Some of these challenges stem from linguistic diversity, abstract semiotics, multimodality, anonymity of the real instigators, etc. Thus, there is a pressing need to start a discussion around such aspects, which are more inclusive than conventional efforts. With this in mind, and motivated by the success of the first edition of the CONSTRAINT Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation, we have launched the second edition in hybrid mode, with a special focus on Multimodal Low-Resource Language Processing to Combat COVID-19 Related Online Hostile Content. The workshop additionally highlighted three major points: 1. Regional languages: offensive posts may be written in low-resource regional languages, e.g., Tamil, Urdu, Bangali, Polish, Czech, Lithuanian, etc. 2. Emergency situations: The proposed solutions should be able to tackle misinformation during emergency situations where, due to the lack of enough historical data, machine learning models need to adopt additional intelligence to handle emerging and novel posts. 3. Early detection: Since the impact of misinformation during emergency situations can be highly detrimental to society (e.g., health-related misadvice during a pandemic may take human's life), we encourage solutions that can detect such hostile posts as early as possible after they have been posted in social media. Our workshop also features a shared task titled: Hero, Villain and Victim: Dissecting harmful memes for Semantic role labelling of entities. The objective is to determine the role of the entities referred to within a meme: hero vs. villain vs. victim vs. other. The meme is to be analyzed from the perspective of its author. The datasets released as part of this shared task span memes from two domains: COVID-19 and US Politics. The complex and engaging nature of the shared task led to a total of 6 unique final submissions for evaluation, from amongst 105 total registered participants. We accepted a total of ten papers: four for the regular track and six for the shared task. 
The workshop papers cover topics ranging from detecting multimodal/unimodal fake news (Choi et al., 2022; Lucas et al., 2022) to aggressive content (Sharif et al., 2022), with additional fine-grained analysis and sub-tasks like document retrieval towards mitigating misinformation (Sundriyal et al., 2022). On the other hand, the accepted papers for the shared task proposed various multimodal fusion strategies including state-ofthe-art encoder models such as variants of ViT, BERT, and CLIP (Nandi et al., 2022; Kun et al., 2022; Montariol et al., 2022), with ensembling playing a key role in the overall performance enhancement. Consequently, diverse strategies for addressing the task along with their limitations are elucidated via the contributions made hereupon.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "The advent of Web 2.0 induced the evolution of what has traditionally been described as a \"participatory Web\". From pop-culture music to Black Friday becoming a global phenomenon, and movements like BlackLivesMatter turning into a powerful instrument of global resistance, the Internet and social media have played a pivotal role. As much as we relish the connectedness facilitated by social media, the sentient being in all of us cannot remain obscured by the perils of the unabated misuse of the very free speech that these platforms aim to empower. Within the shadows of a transparent yet anonymous social media, lurk those disguising themselves as pseudo-flag-bearers of free speech, and pounce on every opportunity they get to spread vile content, detrimental to society. Such miscreants are desperate to misuse those 280 character sound bites to further their anti-openness agendas in the form of hate speech, disinformation, and ill-intended propaganda. Such menace experiences flare-ups during emergency situations such as the COVID-19 outbreak and geopolitically conflicting global order. There have been numerous efforts toward addressing some of these problems computationally, but with evolving complexities of online harmful content, more robust solutions are needed. Some of these challenges stem from linguistic diversity, abstract semiotics, multimodality, anonymity of the real instigators, etc. Thus, there is a pressing need to start a discussion around such aspects, which are more inclusive than conventional efforts. With this in mind, and motivated by the success of the first edition of the CONSTRAINT Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation, we have launched the second edition in hybrid mode, with a special focus on Multimodal Low-Resource Language Processing to Combat COVID-19 Related Online Hostile Content. The workshop additionally highlighted three major points: 1. Regional languages: offensive posts may be written in low-resource regional languages, e.g., Tamil, Urdu, Bangali, Polish, Czech, Lithuanian, etc. 2. Emergency situations: The proposed solutions should be able to tackle misinformation during emergency situations where, due to the lack of enough historical data, machine learning models need to adopt additional intelligence to handle emerging and novel posts. 3. Early detection: Since the impact of misinformation during emergency situations can be highly detrimental to society (e.g., health-related misadvice during a pandemic may take human's life), we encourage solutions that can detect such hostile posts as early as possible after they have been posted in social media. Our workshop also features a shared task titled: Hero, Villain and Victim: Dissecting harmful memes for Semantic role labelling of entities. The objective is to determine the role of the entities referred to within a meme: hero vs. villain vs. victim vs. other. The meme is to be analyzed from the perspective of its author. The datasets released as part of this shared task span memes from two domains: COVID-19 and US Politics. The complex and engaging nature of the shared task led to a total of 6 unique final submissions for evaluation, from amongst 105 total registered participants. We accepted a total of ten papers: four for the regular track and six for the shared task. 
The workshop papers cover topics ranging from detecting multimodal/unimodal fake news (Choi et al., 2022; Lucas et al., 2022) to aggressive content (Sharif et al., 2022), with additional fine-grained analysis and sub-tasks like document retrieval towards mitigating misinformation (Sundriyal et al., 2022). On the other hand, the accepted papers for the shared task proposed various multimodal fusion strategies including state-ofthe-art encoder models such as variants of ViT, BERT, and CLIP (Nandi et al., 2022; Kun et al., 2022; Montariol et al., 2022), with ensembling playing a key role in the overall performance enhancement. Consequently, diverse strategies for addressing the task along with their limitations are elucidated via the contributions made hereupon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language and Information Processing group at the Department of Computer Science and Technology at the University of Cambridge and a member of the European Lab for Learning and Intelligent Systems. We thank the authors and the task participants for their interest in the workshop. We would also like to thank the program committee for their help with reviewing the papers and with advertising the workshop. The work was partially supported by a Wipro research grant, Ramanujan Fellowship, the Infosys Centre for AI, IIIT Delhi, India, and ihub-Anubhuti-iiitd Foundation, set up under the NM-ICPS scheme of the Department of Science and Technology, India. It is also part of the Tanbih mega-project, which is developed at the Qatar Computing Research Institute, HBKU, and aims to limit the impact of fake news, propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "the CONSTRAINT 2022 Shared Task on Detecting the Hero, the Villain, and the Victim in Memes Shivam Sharma, Tharun Suresh",
"authors": [
{
"first": "Atharva",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Himanshi",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Md. Shad",
"middle": [],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "the CONSTRAINT 2022 Shared Task on Detecting the Hero, the Villain, and the Victim in Memes Shivam Sharma, Tharun Suresh, Atharva Kulkarni, Himanshi Mathur, Preslav Nakov, Md. Shad Akhtar and Tanmoy Chakraborty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "DD-TIG at Constraint@ACL2022: Multimodal Understanding and Reasoning for Role Labeling of Entities in Hateful Memes Ziming Zhou",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DD-TIG at Constraint@ACL2022: Multimodal Understanding and Reasoning for Role Labeling of Entities in Hateful Memes Ziming Zhou, Han Zhao, Jingjing Dong, Jun Gao and Xiaolong Liu . . . . . . . . . . . . . . . . . . . . . . . . 12",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "19 Logically at the Constraint 2022: Multimodal role labelling Ludovic",
"authors": [],
"year": null,
"venue": "A semantic role labelling approach for detecting harmful memes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Are you a hero or a villain? A semantic role labelling approach for detecting harmful memes. Shaik Fharook, Syed Sufyan Ahmed, Gurram Rithika, Sumith Sai Budde, Sunil Saumya and Shankar Biradar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Logically at the Constraint 2022: Multimodal role labelling Ludovic Kun, Jayesh Bankoti and David Kiskovski . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Combining Language Models and Linguistic Information to Label Entities in Memes Pranaydeep Singh, Aaron Maladry and",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Combining Language Models and Linguistic Information to Label Entities in Memes Pranaydeep Singh, Aaron Maladry and Els Lefever . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Detecting the Role of an Entity in Harmful Memes: Techniques and their Limitations Rabindra Nath Nandi, Firoj Alam and",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Detecting the Role of an Entity in Harmful Memes: Techniques and their Limitations Rabindra Nath Nandi, Firoj Alam and Preslav Nakov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fine-tuning and Sampling Strategies for Multimodal Role Labeling of Entities under Class Imbalance Syrielle Montariol,\u00c9tienne Simon, Arij Riabi and Djam\u00e9 Seddah",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fine-tuning and Sampling Strategies for Multimodal Role Labeling of Entities under Class Imbalance Syrielle Montariol,\u00c9tienne Simon, Arij Riabi and Djam\u00e9 Seddah . . . . . . . . . . . . . . . . . . . . . . . . . . 55",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Document Retrieval and Claim Verification to Mitigate COVID-19 Misinformation Megha Sundriyal, Ganeshan Malhotra, Md Shad Akhtar, Shubhashis Sengupta",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Document Retrieval and Claim Verification to Mitigate COVID-19 Misinformation Megha Sundriyal, Ganeshan Malhotra, Md Shad Akhtar, Shubhashis Sengupta, Andrew Fano and",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Multilabel Dataset for Detecting Aggressive Texts and Their Targets Omar Sharif, Eftekhar Hossain and Mohammed Moshiul Hoque",
"authors": [
{
"first": "M-Bad",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M-BAD: A Multilabel Dataset for Detecting Aggressive Texts and Their Targets Omar Sharif, Eftekhar Hossain and Mohammed Moshiul Hoque . . . . . . . . . . . . . . . . . . . . . . . . . . . 75",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "How does fake news use a thumbnail? CLIP-based Multimodal Detection on the Unrepresentative News Image Hyewon Choi",
"authors": [
{
"first": "Yejun",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Seunghyun",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Kunwoo",
"middle": [],
"last": "Park",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "How does fake news use a thumbnail? CLIP-based Multimodal Detection on the Unrepresentative News Image Hyewon Choi, Yejun Yoon, Seunghyun Yoon and Kunwoo Park . . . . . . . . . . . . . . . . . . . . . . . . . . . 86",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Detecting False Claims in Low-Resource Regions: A Case Study of Caribbean Islands Jason Lucas",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Detecting False Claims in Low-Resource Regions: A Case Study of Caribbean Islands Jason Lucas, Limeng Cui, Thai Le and Dongwon Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"html": null,
"text": "The CONSTRAINT 2022 Organizers: Tanmoy Chakraborty, Md. Shad Akhtar, Kai Shu, H. Russell Bernard, Maria Liakata, and Preslav Nakov Website: http://lcs2.iiitd.edu.in/CONSTRAINT-2022/",
"num": null,
"content": "<table><tr><td>Organizing Committee Program Committee</td></tr><tr><td>Program Committee Chairs Program Committee</td></tr><tr><td>Tanmoy Chakraborty, IIIT Delhi, India Amila Silva, The University of Melbourne</td></tr><tr><td>Md. Shad Akhtar, IIIT Delhi, India Andreas Vlachos, University of Cambridge</td></tr><tr><td>Kai Shu, Illinois Institute of Technology, USA Anoop Kunchukuttan, Microsoft</td></tr><tr><td>H. Russell Bernard, Arizona State University, USA Arkaitz Zubiaga, Queen Mary University of London</td></tr><tr><td>Maria Liakata, Queen Mary, University of London, UK Balaji Vasan Srinivasan, Adobe Research</td></tr><tr><td>Preslav Nakov, Qatar Computing Research Institute, HBKU, Qatar Firoj Alam, Qatar Computing Research Institute, HBKU</td></tr><tr><td>Marc Spaniol, Universit\u00e9 de Caen</td></tr><tr><td>Matt Lease, University of Texas at Austin Web Chair Monojit Choudhury, Microsoft Research</td></tr><tr><td>Aseem Srivastava, IIIT Delhi, India Tracy King, Adobe Sensei and Search</td></tr><tr><td>Paolo Papotti, EURECOM</td></tr><tr><td>Paolo Rosso, Universitat Polit\u00e8cnica de Val\u00e8ncia</td></tr><tr><td>Invited Speakers Pushpak Bhattacharya, IIT Bombay</td></tr><tr><td>Isabelle Augenstein, University of Copenhagen, Denmark Roy Ka-Wei Lee, Singapore University of Technology and Design</td></tr><tr><td>Smaranda Muresan, Columbia University, USA Xinyi Zhou, Syracuse University</td></tr><tr><td>Andreas Vlachos, University of Cambridge, UK Yingtong Dou, University of Illinois at Chicago</td></tr><tr><td>Reza Zafarani, Syracuse University</td></tr><tr><td>Nitin Agarwal, University of Arkansas at Little Rock</td></tr><tr><td>Victoria Rubin, Western University</td></tr><tr><td>Francesco Barbieri, Snap Research</td></tr><tr><td>Ashique KhudaBukhsh, Carnegie Mellon University</td></tr><tr><td>Ugur Kursuncu, Georgia State University</td></tr><tr><td>Vagelis Papalexakis, University of California Riverside</td></tr><tr><td>Sibel Adali, Rensselaer Polytechnic Institute</td></tr><tr><td>Shivam Sharma, IIIT Delhi, Wipro AI Research</td></tr><tr><td>Chhavi Sharma, Wipro AI Research</td></tr><tr><td>Shivani Kumar, IIIT Delhi</td></tr><tr><td>Yash Kumar Atri, IIIT Delhi</td></tr><tr><td>Sarah Masud, IIIT Delhi</td></tr><tr><td>Sunil Saumya, IIIT Dharwad</td></tr><tr><td>Megha Sundriyal, IIIT Delhi</td></tr><tr><td>Karan Goyal, IIIT Delhi</td></tr><tr><td>Anam Fatima, IIIT Delhi</td></tr></table>"
}
}
}
}