| bibtex_url (string, 41–50 chars) | bibtext (string, 693–2.88k chars) | abstract (string, 0–2k chars) | authors (list, 1–45 items) | title (string, 21–206 chars) | id (string, 7–16 chars) | type (string, 2 classes) | arxiv_id (string, 9–12 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.findings-acl.968.bib
|
@inproceedings{yue-etal-2024-fragrel,
title = "{F}rag{R}el: Exploiting Fragment-level Relations in the External Memory of Large Language Models",
author = "Yue, Xihang and
Zhu, Linchao and
Yang, Yi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.968",
pages = "16348--16361",
abstract = "To process contexts with unlimited length using Large Language Models (LLMs), recent studies explore hierarchically managing the long text. Only several text fragments are taken from the external memory and passed into the temporary working memory, i.e., LLM{'}s context window. However, existing approaches isolatedly handle the text fragments without considering their structural connections, thereby suffering limited capability on texts with intensive inter-relations, e.g., coherent stories and code repositories. This work attempts to resolve this by exploiting the fragment-level relations in external memory. First, we formulate the fragment-level relations and present several instantiations for different text types. Next, we introduce a relation-aware fragment assessment criteria upon previous independent fragment assessment. Finally, we present the fragment-connected Hierarchical Memory based LLM. We validate the benefits of involving these relations on long story understanding, repository-level code generation, and long-term chatting.",
}
|
To process contexts of unlimited length with Large Language Models (LLMs), recent studies explore hierarchically managing the long text: only a few text fragments are taken from the external memory and passed into the temporary working memory, i.e., the LLM{'}s context window. However, existing approaches handle the text fragments in isolation, without considering their structural connections, and thereby suffer limited capability on texts with intensive inter-relations, e.g., coherent stories and code repositories. This work attempts to resolve this by exploiting fragment-level relations in the external memory. First, we formulate fragment-level relations and present several instantiations for different text types. Next, we introduce a relation-aware fragment assessment criterion on top of previous independent fragment assessment. Finally, we present the fragment-connected Hierarchical-Memory-based LLM. We validate the benefits of involving these relations on long story understanding, repository-level code generation, and long-term chatting.
|
[
"Yue, Xihang",
"Zhu, Linchao",
"Yang, Yi"
] |
{F}rag{R}el: Exploiting Fragment-level Relations in the External Memory of Large Language Models
|
findings-acl.968
|
Poster
|
2406.03092v1
|
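The FragRel record above centers on scoring memory fragments by their relations to other fragments, not just their standalone query relevance. Here is a minimal sketch of that idea; the function names, the max-over-neighbors bonus, and the `alpha` blending weight are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: a fragment's final score blends its own query relevance
# with the relevance of fragments it is structurally connected to (e.g.,
# adjacent story chapters or modules that import each other).
from collections import defaultdict

def relation_aware_scores(relevance, edges, alpha=0.7):
    """relevance: {fragment_id: independent relevance score}
    edges: iterable of (fragment_id, related_fragment_id) pairs."""
    neighbors = defaultdict(list)
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    scores = {}
    for frag, own in relevance.items():
        related = [relevance[n] for n in neighbors[frag] if n in relevance]
        bonus = max(related) if related else 0.0   # relation-aware term
        scores[frag] = alpha * own + (1 - alpha) * bonus
    return scores

# Top-k fragments would then enter the LLM's context window ("working memory").
scores = relation_aware_scores(
    {"ch1": 0.9, "ch2": 0.2, "ch7": 0.1},
    edges=[("ch1", "ch2")],
)
print(sorted(scores, key=scores.get, reverse=True)[:2])  # ch2 outranks ch7
```

Note how the connection to the highly relevant `ch1` lifts `ch2` above `ch7`, even though their independent scores are close; that is the behavior an independent-assessment baseline would miss.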
https://aclanthology.org/2024.findings-acl.969.bib
|
@inproceedings{meng-etal-2024-robustness,
title = "On the Robustness of Document-Level Relation Extraction Models to Entity Name Variations",
author = "Meng, Shiao and
Hu, Xuming and
Liu, Aiwei and
Ma, Fukun and
Yang, Yawen and
Li, Shuang and
Wen, Lijie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.969",
pages = "16362--16374",
abstract = "Driven by the demand for cross-sentence and large-scale relation extraction, document-level relation extraction (DocRE) has attracted increasing research interest. Despite the continuous improvement in performance, we find that existing DocRE models which initially perform well may make more mistakes when merely changing the entity names in the document, hindering the generalization to novel entity names. To this end, we systematically investigate the robustness of DocRE models to entity name variations in this work. We first propose a principled pipeline to generate entity-renamed documents by replacing the original entity names with names from Wikidata. By applying the pipeline to DocRED and Re-DocRED datasets, we construct two novel benchmarks named Env-DocRED and Env-Re-DocRED for robustness evaluation. Experimental results show that both three representative DocRE models and two in-context learned large language models consistently lack sufficient robustness to entity name variations, particularly on cross-sentence relation instances and documents with more entities. Finally, we propose an entity variation robust training method which not only improves the robustness of DocRE models but also enhances their understanding and reasoning capabilities. We further verify that the basic idea of this method can be effectively transferred to in-context learning for DocRE as well.",
}
|
Driven by the demand for cross-sentence and large-scale relation extraction, document-level relation extraction (DocRE) has attracted increasing research interest. Despite continuous improvements in performance, we find that existing DocRE models that initially perform well may make more mistakes when the entity names in the document are merely changed, which hinders generalization to novel entity names. To this end, we systematically investigate the robustness of DocRE models to entity name variations in this work. We first propose a principled pipeline to generate entity-renamed documents by replacing the original entity names with names from Wikidata. By applying the pipeline to the DocRED and Re-DocRED datasets, we construct two novel benchmarks named Env-DocRED and Env-Re-DocRED for robustness evaluation. Experimental results show that three representative DocRE models and two in-context learned large language models alike consistently lack sufficient robustness to entity name variations, particularly on cross-sentence relation instances and documents with more entities. Finally, we propose an entity variation robust training method which not only improves the robustness of DocRE models but also enhances their understanding and reasoning capabilities. We further verify that the basic idea of this method can be effectively transferred to in-context learning for DocRE as well.
|
[
"Meng, Shiao",
"Hu, Xuming",
"Liu, Aiwei",
"Ma, Fukun",
"Yang, Yawen",
"Li, Shuang",
"Wen, Lijie"
] |
On the Robustness of Document-Level Relation Extraction Models to Entity Name Variations
|
findings-acl.969
|
Poster
|
2406.07444v1
|
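The Env-DocRED pipeline described above boils down to swapping every mention of an entity for an alternative name while keeping the relation labels fixed. A toy illustration follows; the replacement map stands in for the paper's Wikidata lookup, and the longest-first substitution order is my own precaution against partial-overlap errors:

```python
import re

def rename_entities(document, mentions, name_map):
    """mentions: {entity_id: [surface forms]}; name_map: {entity_id: new name}.
    Replaces every surface form of each entity. Relation labels are defined
    over entity ids, so they stay valid; only the names a model sees change."""
    for entity_id, surfaces in mentions.items():
        new_name = name_map[entity_id]
        # Replace longer surface forms first so "Marie Curie" is not
        # clobbered by a prior substitution of "Curie".
        for surface in sorted(surfaces, key=len, reverse=True):
            document = re.sub(re.escape(surface), new_name, document)
    return document

doc = "Marie Curie was born in Warsaw. Curie won two Nobel Prizes."
print(rename_entities(doc, {"E1": ["Marie Curie", "Curie"]},
                      {"E1": "Ada Lovelace"}))
# -> "Ada Lovelace was born in Warsaw. Ada Lovelace won two Nobel Prizes."
```

A robust DocRE model should predict the same relations for the renamed document as for the original; the benchmark measures how far models fall short of that.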
https://aclanthology.org/2024.findings-acl.970.bib
|
@inproceedings{hu-etal-2024-resemo,
title = "{RESEMO}: A Benchmark {C}hinese Dataset for Studying Responsive Emotion from Social Media Content",
author = "Hu, Bo and
Zhang, Meng and
Xie, Chenfei and
Tian, Yuanhe and
Song, Yan and
Mao, Zhendong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.970",
pages = "16375--16387",
abstract = "On social media platforms, users{'} emotions are triggered when they encounter particular content from other users,where such emotions are different from those that spontaneously emerged, owing to the {``}responsive{''} nature. Analyzing the aforementioned responsive emotions from user interactions is a task of significant importance for understanding human cognition, the mechanisms of emotion generation, and behavior on the Internet, etc. Performing the task with artificial intelligence generally requires human-annotated data to help train a well-performing system, while existing data resources do not cover this specific area, with none of them focusing on responsive emotion analysis. In this paper, we propose a Chinese dataset named ResEmo for responsive emotion analysis, including 3813 posts with 68,781 comments collected from Weibo, the largest social media platform in China. ResEmo contains three types of human annotations with respect to responsive emotions, namely, responsive relationship, responsive emotion cause, and responsive emotion category. Moreover, to test this dataset, we build large language model (LLM) baseline methods for responsive relation extraction, responsive emotion cause extraction, and responsive emotion detection, which show the potential of the proposed ResEmo being a benchmark for future studies on responsive emotions.",
}
|
On social media platforms, users{'} emotions are triggered when they encounter particular content from other users, where such emotions differ from those that emerge spontaneously, owing to their {``}responsive{''} nature. Analyzing these responsive emotions in user interactions is of significant importance for understanding human cognition, the mechanisms of emotion generation, and behavior on the Internet. Performing the task with artificial intelligence generally requires human-annotated data to train a well-performing system, yet existing data resources do not cover this specific area, with none of them focusing on responsive emotion analysis. In this paper, we propose a Chinese dataset named ResEmo for responsive emotion analysis, comprising 3,813 posts with 68,781 comments collected from Weibo, the largest social media platform in China. ResEmo contains three types of human annotations with respect to responsive emotions, namely responsive relationship, responsive emotion cause, and responsive emotion category. Moreover, to test this dataset, we build large language model (LLM) baseline methods for responsive relation extraction, responsive emotion cause extraction, and responsive emotion detection, which show the potential of ResEmo as a benchmark for future studies on responsive emotions.
|
[
"Hu, Bo",
"Zhang, Meng",
"Xie, Chenfei",
"Tian, Yuanhe",
"Song, Yan",
"Mao, Zhendong"
] |
{RESEMO}: A Benchmark {C}hinese Dataset for Studying Responsive Emotion from Social Media Content
|
findings-acl.970
|
Poster
|
1506.06021v1
|
https://aclanthology.org/2024.findings-acl.971.bib
|
@inproceedings{ryu-etal-2024-ehr,
title = "{EHR}-{S}eq{SQL} : A Sequential Text-to-{SQL} Dataset For Interactively Exploring Electronic Health Records",
author = "Ryu, Jaehee and
Cho, Seonhee and
Lee, Gyubok and
Choi, Edward",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.971",
pages = "16388--16407",
abstract = "In this paper, we introduce EHR-SeqSQL, a novel sequential text-to-SQL dataset for Electronic Health Record (EHR) databases. EHR-SeqSQL is designed to address critical yet underexplored aspects in text-to-SQL parsing: interactivity, compositionality, and efficiency. To the best of our knowledge, EHR-SeqSQL is not only the largest but also the first medical text-to-SQL dataset benchmark to include sequential and contextual questions. We provide a data split and the new test set designed to assess compositional generalization ability. Our experiments demonstrate the superiority of a multi-turn approach over a single-turn approach in learning compositionality. Additionally, our dataset integrates specially crafted tokens into SQL queries to improve execution efficiency. With EHR-SeqSQL, we aim to bridge the gap between practical needs and academic research in the text-to-SQL domain.",
}
|
In this paper, we introduce EHR-SeqSQL, a novel sequential text-to-SQL dataset for Electronic Health Record (EHR) databases. EHR-SeqSQL is designed to address critical yet underexplored aspects of text-to-SQL parsing: interactivity, compositionality, and efficiency. To the best of our knowledge, EHR-SeqSQL is not only the largest but also the first medical text-to-SQL dataset benchmark to include sequential and contextual questions. We provide a data split and a new test set designed to assess compositional generalization ability. Our experiments demonstrate the superiority of a multi-turn approach over a single-turn approach in learning compositionality. Additionally, our dataset integrates specially crafted tokens into SQL queries to improve execution efficiency. With EHR-SeqSQL, we aim to bridge the gap between practical needs and academic research in the text-to-SQL domain.
|
[
"Ryu, Jaehee",
"Cho, Seonhee",
"Lee, Gyubok",
"Choi, Edward"
] |
{EHR}-{S}eq{SQL} : A Sequential Text-to-{SQL} Dataset For Interactively Exploring Electronic Health Records
|
findings-acl.971
|
Poster
|
2406.00019v3
|
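Sequential text-to-SQL, as in the EHR-SeqSQL record above, means each question is interpreted against the conversation so far. A minimal sketch of how multi-turn examples might be serialized for a parser; the prompt layout and field names are illustrative assumptions, not the dataset's actual schema:

```python
def build_prompt(history, question):
    """Concatenate prior (question, SQL) turns so the parser can resolve
    contextual references like "of those" in the new question."""
    lines = []
    for i, (q, sql) in enumerate(history, 1):
        lines.append(f"Q{i}: {q}\nSQL{i}: {sql}")
    lines.append(f"Q{len(history) + 1}: {question}\nSQL{len(history) + 1}:")
    return "\n".join(lines)

history = [("How many patients were admitted in 2120?",
            "SELECT COUNT(*) FROM admissions WHERE admityear = 2120")]
print(build_prompt(history, "Of those, how many stayed longer than 7 days?"))
```

The multi-turn approach the abstract favors amounts to training and evaluating on prompts of exactly this shape, rather than on each question in isolation.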
https://aclanthology.org/2024.findings-acl.972.bib
|
@inproceedings{wang-etal-2024-keep,
title = "{KEEP} {CHATTING}! An Attractive Dataset for Continuous Conversation Agents",
author = "Wang, Yihe and
Liu, Jin and
Wan, Yao and
Li, Yitong and
Liu, Zifeng and
Chen, Weipeng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.972",
pages = "16408--16414",
abstract = "Ongoing chatting is an important step for conversational agents to build long-term connections with people. However, people tend to quickly lose interest in chatting if the conversational agent{'}s words are not engaging enough. In this paper, we present a novel task of increasing users{'} willingness to continue talking to the agent.We collect a dataset named ContinuousChat by: (i) collecting personas and revising them, and then expanding the personas to detailed-personas through experiences, daily life, future plans, or interesting stories; (ii) expanding detailed-personas into the dialogues, and inject emotions and feelings into them; (iii) rewriting the dialogues in specific styles through few-shot prompt, conditioning on handwritten style-specific examples.We benchmark LLMs on ContinuousChat Dataset using both fine-tuning and in-context learning settings. Experiments over publicly available models demonstrate that although there is substantial room for improvement in generating style-specific dialogues, our ContinuousChat dataset is valuable in guiding conversational agents to generate more attractive dialogues and increase users{'} willingness to continue the conversations.",
}
|
Ongoing chatting is an important step for conversational agents to build long-term connections with people. However, people tend to quickly lose interest in chatting if the conversational agent{'}s words are not engaging enough. In this paper, we present a novel task of increasing users{'} willingness to continue talking to the agent. We collect a dataset named ContinuousChat by: (i) collecting personas and revising them, then expanding the personas into detailed-personas through experiences, daily life, future plans, or interesting stories; (ii) expanding detailed-personas into dialogues and injecting emotions and feelings into them; (iii) rewriting the dialogues in specific styles through few-shot prompting, conditioning on handwritten style-specific examples. We benchmark LLMs on the ContinuousChat dataset using both fine-tuning and in-context learning settings. Experiments over publicly available models demonstrate that although there is substantial room for improvement in generating style-specific dialogues, our ContinuousChat dataset is valuable in guiding conversational agents to generate more attractive dialogues and increase users{'} willingness to continue the conversations.
|
[
"Wang, Yihe",
"Liu, Jin",
"Wan, Yao",
"Li, Yitong",
"Liu, Zifeng",
"Chen, Weipeng"
] |
{KEEP} {CHATTING}! An Attractive Dataset for Continuous Conversation Agents
|
findings-acl.972
|
Poster
|
2401.02978v1
|
https://aclanthology.org/2024.findings-acl.973.bib
|
@inproceedings{zhao-etal-2024-repair,
title = "{R}e{P}air: Automated Program Repair with Process-based Feedback",
author = "Zhao, Yuze and
Huang, Zhenya and
Ma, Yixiao and
Li, Rui and
Zhang, Kai and
Jiang, Hao and
Liu, Qi and
Zhu, Linbo and
Su, Yu",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.973",
pages = "16415--16429",
abstract = "The gap between the trepidation of program reliability and the expense of repairs underscore the indispensability for Automated Program Repair (APR). APR is instrumental in transforming vulnerable programs into more robust ones, bolstering program reliability while simultaneously diminishing the financial burden of manual repairs. Commercial-scale language models (LM) have taken APR to unprecedented levels. However, due to the limitations of model capabilities by parameters, a one-step substantial modification may not achieve the desired effect for models with parameters less than 100B. Moreover, humans interact with the LLM through explicit prompts, which hinders the LLM from receiving feedback from compiler and test cases to automatically optimize its repair policies. Explicit prompts from humans not only increase additional manpower costs, but also pose potential misunderstandings between human{'}s intent and LMs.Based on the above considerations, we are exploring how to ensure small-scale LM still outperform through process supervision and feedback. We start by constructing a dataset named CodeNet4Repair, replete with multiple repair records, which supervises the fine-tuning of a foundational mode. Building upon the encouraging outcomes of reinforcement learning, we develop a reward model that serves as a critic, providing feedback for the fine-tuned LM{'}s action, progressively optimizing its policy. During inference, we require the LM to generate solutions iteratively until the repair effect no longer improves or hits the maximum step limit. The experimental results show that this process-based feedback not only outperforms larger outcome-based generation methods, but also nearly matches the performance of closed-source commercial large-scale LMs.",
}
|
The gap between concerns over program reliability and the expense of repairs underscores the indispensability of Automated Program Repair (APR). APR is instrumental in transforming vulnerable programs into more robust ones, bolstering program reliability while simultaneously diminishing the financial burden of manual repairs. Commercial-scale language models (LMs) have taken APR to unprecedented levels. However, because model capability is limited by parameter count, a one-step substantial modification may not achieve the desired effect for models with fewer than 100B parameters. Moreover, humans interact with the LLM through explicit prompts, which hinders the LLM from receiving feedback from the compiler and test cases to automatically optimize its repair policies. Explicit prompts from humans not only incur additional manpower costs but also risk misunderstandings between human intent and the LMs. Based on the above considerations, we explore how to ensure that small-scale LMs still perform well through process supervision and feedback. We start by constructing a dataset named CodeNet4Repair, replete with multiple repair records, which supervises the fine-tuning of a foundational model. Building upon the encouraging outcomes of reinforcement learning, we develop a reward model that serves as a critic, providing feedback on the fine-tuned LM{'}s actions and progressively optimizing its policy. During inference, we require the LM to generate solutions iteratively until the repair effect no longer improves or the maximum step limit is reached. The experimental results show that this process-based feedback not only outperforms larger outcome-based generation methods, but also nearly matches the performance of closed-source commercial large-scale LMs.
|
[
"Zhao, Yuze",
"Huang, Zhenya",
"Ma, Yixiao",
"Li, Rui",
"Zhang, Kai",
"Jiang, Hao",
"Liu, Qi",
"Zhu, Linbo",
"Su, Yu"
] |
{R}e{P}air: Automated Program Repair with Process-based Feedback
|
findings-acl.973
|
Poster
|
2208.08235v1
|
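The RePair abstract describes an inference loop: generate a candidate patch, score it with a learned critic, and stop once the score stops improving or a step budget is hit. A schematic version follows; `generate_patch` and `score_patch` are hypothetical stand-ins for the fine-tuned LM and the reward model, and the stopping rule is my reading of the abstract:

```python
def iterative_repair(program, generate_patch, score_patch, max_steps=5):
    """Keep asking the model for repairs while the critic's score improves."""
    best, best_score = program, score_patch(program)
    for _ in range(max_steps):
        candidate = generate_patch(best)
        score = score_patch(candidate)
        if score <= best_score:      # repair effect no longer improves
            break
        best, best_score = candidate, score
    return best

# Toy demo with stubs: the "critic" here simply prefers shorter programs.
fixed = iterative_repair(
    "x = 1/0",
    generate_patch=lambda p: p.replace("1/0", "1"),
    score_patch=lambda p: -len(p),
)
print(fixed)  # -> "x = 1"
```

In the paper's setting the critic's signal comes from process supervision (compiler and test-case feedback folded into a reward model) rather than a handwritten heuristic like the stub above.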
https://aclanthology.org/2024.findings-acl.974.bib
|
@inproceedings{xu-etal-2024-concise,
title = "Concise and Precise Context Compression for Tool-Using Language Models",
author = "Xu, Yang and
Feng, Yunlong and
Mu, Honglin and
Hou, Yutai and
Li, Yitong and
Wang, Xinghao and
Zhong, Wanjun and
Li, Zhongyang and
Tu, Dandan and
Zhu, Qingfu and
Zhang, Min and
Che, Wanxiang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.974",
pages = "16430--16441",
abstract = "Through reading the documentation in the context, tool-using language models can dynamically extend their capability using external tools. The cost is that we have to input lengthy documentation every time the model needs to use the tool, occupying the input window as well as slowing down the decoding process.Given the progress in general-purpose compression, soft context compression is a suitable approach to alleviate the problem. However, when compressing tool documentation, existing methods suffer from the weaknesses of key information loss (specifically, tool/parameter name errors) and difficulty in adjusting the length of compressed sequences based on documentation lengths.To address these problems, we propose two strategies for compressing tool documentation into concise and precise summary sequences for tool-using language models. 1) Selective compression strategy mitigates key information loss by deliberately retaining key information as raw text tokens. 2) Block compression strategy involves dividing tool documentation into short chunks and then employing a fixed-length compression model to achieve variable-length compression. This strategy facilitates the flexible adjustment of the compression ratio.Results on API-Bank and APIBench show that our approach reaches a performance comparable to the upper-bound baseline under up to 16x compression ratio.",
}
|
Through reading documentation in the context, tool-using language models can dynamically extend their capabilities using external tools. The cost is that we have to input lengthy documentation every time the model needs to use a tool, occupying the input window and slowing down the decoding process. Given the progress in general-purpose compression, soft context compression is a suitable approach to alleviate the problem. However, when compressing tool documentation, existing methods suffer from key information loss (specifically, tool/parameter name errors) and difficulty in adjusting the length of compressed sequences to documentation lengths. To address these problems, we propose two strategies for compressing tool documentation into concise and precise summary sequences for tool-using language models. 1) The selective compression strategy mitigates key information loss by deliberately retaining key information as raw text tokens. 2) The block compression strategy divides tool documentation into short chunks and then employs a fixed-length compression model to achieve variable-length compression; this facilitates flexible adjustment of the compression ratio. Results on API-Bank and APIBench show that our approach reaches performance comparable to the upper-bound baseline at compression ratios of up to 16x.
|
[
"Xu, Yang",
"Feng, Yunlong",
"Mu, Honglin",
"Hou, Yutai",
"Li, Yitong",
"Wang, Xinghao",
"Zhong, Wanjun",
"Li, Zhongyang",
"Tu, D",
"an",
"Zhu, Qingfu",
"Zhang, Min",
"Che, Wanxiang"
] |
Concise and Precise Context Compression for Tool-Using Language Models
|
findings-acl.974
|
Poster
|
2407.02043v1
|
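The two strategies in the record above compose naturally: chunk the documentation, compress each chunk with a fixed-length model, and pass key tokens through verbatim. A sketch of the bookkeeping, where `compress_block` is a hypothetical stand-in for the fixed-length compression model and the key-token pass-through mirrors the selective strategy:

```python
def compress_documentation(doc_tokens, key_tokens, compress_block, block_size=64):
    """Split documentation into blocks and compress each one. Tokens in
    key_tokens (e.g., tool and parameter names) are passed through as raw
    text so they cannot be lost to lossy compression."""
    output = []
    for start in range(0, len(doc_tokens), block_size):
        block = doc_tokens[start:start + block_size]
        kept = [t for t in block if t in key_tokens]          # selective
        summary = compress_block([t for t in block if t not in key_tokens])
        output.extend(kept + summary)                         # block-wise
    return output  # total length scales with len(doc_tokens) / block_size

toy = compress_documentation(
    ["get_weather", "returns", "the", "current", "weather", "for", "a", "city"],
    key_tokens={"get_weather"},
    compress_block=lambda tokens: ["<summary>"],  # stand-in compressor
    block_size=4,
)
print(toy)  # -> ['get_weather', '<summary>', '<summary>']
```

Because the number of blocks grows with the documentation, the compressed sequence length adjusts to the input length even though the compressor itself is fixed-length — the variable-length property the abstract highlights.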
https://aclanthology.org/2024.findings-acl.975.bib
|
@inproceedings{elgaar-etal-2024-meddec,
title = "{M}ed{D}ec: A Dataset for Extracting Medical Decisions from Discharge Summaries",
author = "Elgaar, Mohamed and
Cheng, Jiali and
Vakil, Nidhi and
Amiri, Hadi and
Celi, Leo Anthony",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand and virtual meeting",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-acl.975",
pages = "16442--16455",
abstract = "Medical decisions directly impact individuals{'} health and well-being. Extracting decision spans from clinical notes plays a crucial role in understanding medical decision-making processes. In this paper, we develop a new dataset called {``}MedDec,{''} which contains clinical notes of eleven different phenotypes (diseases) annotated by ten types of medical decisions. We introduce the task of medical decision extraction, aiming to jointly extract and classify different types of medical decisions within clinical notes. We provide a comprehensive analysis of the dataset, develop a span detection model as a baseline for this task, evaluate recent span detection approaches, and employ a few metrics to measure the complexity of data samples. Our findings shed light on the complexities inherent in clinical decision extraction and enable future work in this area of research. The dataset and code are available through https://github.com/CLU-UML/MedDec.",
}
|
Medical decisions directly impact individuals{'} health and well-being. Extracting decision spans from clinical notes plays a crucial role in understanding medical decision-making processes. In this paper, we develop a new dataset called {``}MedDec,{''} which contains clinical notes for eleven different phenotypes (diseases) annotated with ten types of medical decisions. We introduce the task of medical decision extraction, aiming to jointly extract and classify different types of medical decisions within clinical notes. We provide a comprehensive analysis of the dataset, develop a span detection model as a baseline for this task, evaluate recent span detection approaches, and employ a few metrics to measure the complexity of data samples. Our findings shed light on the complexities inherent in clinical decision extraction and enable future work in this area of research. The dataset and code are available at https://github.com/CLU-UML/MedDec.
|
[
"Elgaar, Mohamed",
"Cheng, Jiali",
"Vakil, Nidhi",
"Amiri, Hadi",
"Celi, Leo Anthony"
] |
{M}ed{D}ec: A Dataset for Extracting Medical Decisions from Discharge Summaries
|
findings-acl.975
|
Poster
|
2305.15222v1
|
https://aclanthology.org/2024.alvr-1.1.bib
|
@inproceedings{schneider-biemann-2024-wismir3,
title = "{WISMIR}3: A Multi-Modal Dataset to Challenge Text-Image Retrieval Approaches",
author = "Schneider, Florian and
Biemann, Chris",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.1",
pages = "1--6",
abstract = "This paper presents WISMIR3, a multi-modal dataset comprising roughly 300K text-image pairs from Wikipedia. With a sophisticated automatic ETL pipeline, we scraped, filtered, and transformed the data so that WISMIR3 intrinsically differs from other popular text-image datasets like COCO and Flickr30k. We prove this difference by comparing various linguistic statistics between the three datasets computed using the pipeline. The primary purpose of WISMIR3 is to use it as a benchmark to challenge state-of-the-art text-image retrieval approaches, which already reach around 90{\%} Recall@5 scores on the mentioned popular datasets. Therefore, we ran several text-image retrieval experiments on our dataset using current models, which show that the models, in fact, perform significantly worse compared to evaluation results on COCO and Flickr30k. In addition, for each text-image pair, we release features computed by Faster-R-CNN and CLIP models. With this, we want to ease and motivate the use of the dataset for other researchers.",
}
|
This paper presents WISMIR3, a multi-modal dataset comprising roughly 300K text-image pairs from Wikipedia. With a sophisticated automatic ETL pipeline, we scraped, filtered, and transformed the data so that WISMIR3 intrinsically differs from other popular text-image datasets like COCO and Flickr30k. We demonstrate this difference by comparing various linguistic statistics of the three datasets, computed using the pipeline. The primary purpose of WISMIR3 is to serve as a benchmark that challenges state-of-the-art text-image retrieval approaches, which already reach around 90{\%} Recall@5 on the aforementioned popular datasets. Therefore, we ran several text-image retrieval experiments on our dataset using current models, which show that the models indeed perform significantly worse than they do on COCO and Flickr30k. In addition, for each text-image pair, we release features computed by Faster R-CNN and CLIP models. With this, we want to ease and motivate the use of the dataset by other researchers.
|
[
"Schneider, Florian",
"Biemann, Chris"
] |
{WISMIR}3: A Multi-Modal Dataset to Challenge Text-Image Retrieval Approaches
|
alvr-1.1
|
Poster
|
1712.09550v2
|
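Recall@5, the retrieval metric cited in the WISMIR3 record, counts how often the gold item appears among a query's top five retrieved results. A compact reference implementation (data layout is my own choice):

```python
def recall_at_k(rankings, gold, k=5):
    """rankings: {query_id: item ids ordered best-first};
    gold: {query_id: the single correct item id}."""
    hits = sum(1 for q, ranked in rankings.items() if gold[q] in ranked[:k])
    return hits / len(rankings)

print(recall_at_k({"q1": ["b", "a", "c"], "q2": ["d", "e", "f"]},
                  {"q1": "a", "q2": "z"}, k=2))  # -> 0.5
```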
https://aclanthology.org/2024.alvr-1.2.bib
|
@inproceedings{geigle-etal-2024-mblip,
title = "m{BLIP}: Efficient Bootstrapping of Multilingual Vision-{LLM}s",
author = "Geigle, Gregor and
Jain, Abhay and
Timofte, Radu and
Glava{\v{s}}, Goran",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.2",
pages = "7--25",
abstract = "Modular vision-language models (Vision-LLMs) align pretrained image encoders with (frozen) large language models (LLMs) and post-hoc condition LLMs to {`}understand{'} the image input. With the abundance of readily available high-quality English image-text data as well as strong monolingual English LLMs, the research focus has been on English-only Vision-LLMs. Multilingual vision-language models are still predominantly obtained via expensive end-to-end pretraining, resulting in comparatively smaller models, trained on limited multilingual image data supplemented with text-only multilingual corpora. We present mBLIP, the first Vision-LLM leveraging multilingual LLMs, which we obtain in a computationally efficient manner on consumer-level hardware. To this end, we \textit{re-align} an image encoder previously tuned to an English LLM to a new, multilingual LLM using only a few million multilingual training examples derived from a mix of vision-and-language tasks, which we obtain by machine-translating high-quality English data to 95 languages. On the IGLUE benchmark and XM3600, mBLIP yields results competitive with state-of-the-art models and it greatly outperforms strong English-only Vision-LLMs like Llava 1.5. We release our model, code, and train data at \url{https://github.com/gregor-ge/mBLIP}.",
}
|
Modular vision-language models (Vision-LLMs) align pretrained image encoders with (frozen) large language models (LLMs) and post-hoc condition LLMs to {`}understand{'} the image input. With the abundance of readily available high-quality English image-text data as well as strong monolingual English LLMs, the research focus has been on English-only Vision-LLMs. Multilingual vision-language models are still predominantly obtained via expensive end-to-end pretraining, resulting in comparatively smaller models, trained on limited multilingual image data supplemented with text-only multilingual corpora. We present mBLIP, the first Vision-LLM leveraging multilingual LLMs, which we obtain in a computationally efficient manner on consumer-level hardware. To this end, we \textit{re-align} an image encoder previously tuned to an English LLM to a new, multilingual LLM using only a few million multilingual training examples derived from a mix of vision-and-language tasks, which we obtain by machine-translating high-quality English data to 95 languages. On the IGLUE benchmark and XM3600, mBLIP yields results competitive with state-of-the-art models and it greatly outperforms strong English-only Vision-LLMs like Llava 1.5. We release our model, code, and train data at \url{https://github.com/gregor-ge/mBLIP}.
|
[
"Geigle, Gregor",
"Jain, Abhay",
"Timofte, Radu",
"Glava{\\v{s}}, Goran"
] |
m{BLIP}: Efficient Bootstrapping of Multilingual Vision-{LLM}s
|
alvr-1.2
|
Poster
|
2106.03469v2
|
https://aclanthology.org/2024.alvr-1.3.bib
|
@inproceedings{xia-etal-2024-lmpt,
title = "{LMPT}: Prompt Tuning with Class-Specific Embedding Loss for Long-Tailed Multi-Label Visual Recognition",
author = "Xia, Peng and
Xu, Di and
Hu, Ming and
Ju, Lie and
Ge, Zongyuan",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.3",
pages = "26--36",
abstract = "Long-tailed multi-label visual recognition (LTML) task is a highly challenging task due to the label co-occurrence and imbalanced data distribution. In this work, we propose a unified framework for LTML, namely prompt tuning with class-specific embedding loss (LMPT), capturing the semantic feature interactions between categories by combining text and image modality data and improving the performance synchronously on both head and tail classes. Specifically, LMPT introduces the embedding loss function with class-aware soft margin and re-weighting to learn class-specific contexts with the benefit of textual descriptions (captions), which could help establish semantic relationships between classes, especially between the head and tail classes. Furthermore, taking into account the class imbalance, the distribution-balanced loss is adopted as the classification loss function to further improve the performance on the tail classes without compromising head classes. Extensive experiments are conducted on VOC-LT and COCO-LT datasets, which demonstrates that our method significantly surpasses the previous state-of-the-art methods and zero-shot CLIP in LTML. Our codes are fully public at https://github.com/richard-peng-xia/LMPT.",
}
|
Long-tailed multi-label visual recognition (LTML) is a highly challenging task due to label co-occurrence and imbalanced data distribution. In this work, we propose a unified framework for LTML, namely prompt tuning with class-specific embedding loss (LMPT), which captures the semantic feature interactions between categories by combining text and image modality data and improves performance synchronously on both head and tail classes. Specifically, LMPT introduces an embedding loss function with class-aware soft margin and re-weighting to learn class-specific contexts with the benefit of textual descriptions (captions), which helps establish semantic relationships between classes, especially between head and tail classes. Furthermore, taking class imbalance into account, the distribution-balanced loss is adopted as the classification loss function to further improve performance on the tail classes without compromising the head classes. Extensive experiments are conducted on the VOC-LT and COCO-LT datasets, demonstrating that our method significantly surpasses previous state-of-the-art methods and zero-shot CLIP in LTML. Our code is fully public at https://github.com/richard-peng-xia/LMPT.
|
[
"Xia, Peng",
"Xu, Di",
"Hu, Ming",
"Ju, Lie",
"Ge, Zongyuan"
] |
{LMPT}: Prompt Tuning with Class-Specific Embedding Loss for Long-Tailed Multi-Label Visual Recognition
|
alvr-1.3
|
Poster
|
2305.04536v2
|
https://aclanthology.org/2024.alvr-1.4.bib
|
@inproceedings{lovenia-etal-2024-negative,
title = "Negative Object Presence Evaluation ({NOPE}) to Measure Object Hallucination in Vision-Language Models",
author = "Lovenia, Holy and
Dai, Wenliang and
Cahyawijaya, Samuel and
Ji, Ziwei and
Fung, Pascale",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.4",
pages = "37--58",
abstract = "Object hallucination poses a significant challenge in vision-language (VL) models, often leading to the generation of nonsensical or unfaithful responses with non-existent objects. However, the absence of a general measurement for evaluating object hallucination in VL models has hindered our understanding and ability to mitigate this issue. In this work, we present NOPE (Negative Object Presence Evaluation), a novel benchmark designed to assess object hallucination in VL models through visual question answering (VQA). We propose a cost-effective and scalable approach utilizing large language models to generate 29.5k synthetic negative pronoun ($NegP$) data of high quality for NOPE. We extensively investigate the performance of 10 state-of-the-art VL models in discerning the non-existence of objects in visual questions, where the ground truth answers are denoted as (e.g., {``}none{''}). Additionally, we evaluate their standard performance on visual questions on 9 other VQA datasets. Through our experiments, we demonstrate that no VL model is immune to the vulnerability of object hallucination, as all models achieve accuracy below 10{\%} on $NegP$. Furthermore, we uncover that lexically diverse visual questions, question types with large scopes, and scene-relevant objects capitalize the risk of object hallucination in VL models.",
}
|
Object hallucination poses a significant challenge in vision-language (VL) models, often leading to the generation of nonsensical or unfaithful responses with non-existent objects. However, the absence of a general measurement for evaluating object hallucination in VL models has hindered our understanding of, and ability to mitigate, this issue. In this work, we present NOPE (Negative Object Presence Evaluation), a novel benchmark designed to assess object hallucination in VL models through visual question answering (VQA). We propose a cost-effective and scalable approach utilizing large language models to generate 29.5k high-quality synthetic negative pronoun ($NegP$) data for NOPE. We extensively investigate the performance of 10 state-of-the-art VL models in discerning the non-existence of objects in visual questions, where the ground truth answers are negative pronouns (e.g., {``}none{''}). Additionally, we evaluate their standard performance on visual questions over 9 other VQA datasets. Through our experiments, we demonstrate that no VL model is immune to object hallucination, as all models achieve accuracy below 10{\%} on $NegP$. Furthermore, we uncover that lexically diverse visual questions, question types with large scopes, and scene-relevant objects amplify the risk of object hallucination in VL models.
|
[
"Lovenia, Holy",
"Dai, Wenliang",
"Cahyawijaya, Samuel",
"Ji, Ziwei",
"Fung, Pascale"
] |
Negative Object Presence Evaluation ({NOPE}) to Measure Object Hallucination in Vision-Language Models
|
alvr-1.4
|
Poster
|
2310.05338v2
|
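Evaluating on NOPE's $NegP$ split reduces to checking whether a model answers with a negative pronoun when the queried object is absent from the image. A hedged sketch of such a scorer; the accepted answer set and the exact-match normalization are assumptions, not the benchmark's official protocol:

```python
# Hypothetical set of accepted negative-pronoun answers.
NEGATIVE_ANSWERS = {"none", "nothing", "no one", "nobody", "nowhere", "neither"}

def negp_accuracy(predictions):
    """predictions: model answers to questions whose ground truth is a
    negative pronoun (the object asked about is not in the image)."""
    correct = sum(p.strip().lower() in NEGATIVE_ANSWERS for p in predictions)
    return correct / len(predictions)

print(negp_accuracy(["None", "a red car", "nothing"]))  # -> 0.666...
```

The hallucinated "a red car" in the demo is exactly the failure mode the benchmark is designed to expose.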
https://aclanthology.org/2024.alvr-1.5.bib
|
@inproceedings{quantmeyer-etal-2024-clip,
title = "How and where does {CLIP} process negation?",
author = "Quantmeyer, Vincent and
Mosteiro, Pablo and
Gatt, Albert",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.5",
pages = "59--72",
abstract = "Various benchmarks have been proposed to test linguistic understanding in pre-trained vision {\&} language (VL) models. Here we build on the existence task from the VALSE benchmark (Parcalabescu et al., 2022) which we use to test models{'} understanding of negation, a particularly interesting issue for multimodal models. However, while such VL benchmarks are useful for measuring model performance, they do not reveal anything about the internal processes through which these models arrive at their outputs in such visio-linguistic tasks. We take inspiration from the growing literature on model interpretability to explain the behaviour of VL models on the understanding of negation. Specifically, we approach these questions through an in-depth analysis of the text encoder in CLIP (Radford et al., 2021), a highly influential VL model. We localise parts of the encoder that process negation and analyse the role of attention heads in this task. Our contributions are threefold. We demonstrate how methods from the language model interpretability literature (e.g., causal tracing) can be translated to multimodal models and tasks; we provide concrete insights into how CLIP processes negation on the VALSE existence task; and we highlight inherent limitations in the VALSE dataset as a benchmark for linguistic understanding.",
}
|
Various benchmarks have been proposed to test linguistic understanding in pre-trained vision {\&} language (VL) models. Here we build on the existence task from the VALSE benchmark (Parcalabescu et al., 2022) which we use to test models{'} understanding of negation, a particularly interesting issue for multimodal models. However, while such VL benchmarks are useful for measuring model performance, they do not reveal anything about the internal processes through which these models arrive at their outputs in such visio-linguistic tasks. We take inspiration from the growing literature on model interpretability to explain the behaviour of VL models on the understanding of negation. Specifically, we approach these questions through an in-depth analysis of the text encoder in CLIP (Radford et al., 2021), a highly influential VL model. We localise parts of the encoder that process negation and analyse the role of attention heads in this task. Our contributions are threefold. We demonstrate how methods from the language model interpretability literature (e.g., causal tracing) can be translated to multimodal models and tasks; we provide concrete insights into how CLIP processes negation on the VALSE existence task; and we highlight inherent limitations in the VALSE dataset as a benchmark for linguistic understanding.
|
[
"Quantmeyer, Vincent",
"Mosteiro, Pablo",
"Gatt, Albert"
] |
How and where does {CLIP} process negation?
|
alvr-1.5
|
Poster
|
2407.10488v1
|
https://aclanthology.org/2024.alvr-1.6.bib
|
@inproceedings{nikandrou-etal-2024-enhancing,
title = "Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation",
author = "Nikandrou, Malvina and
Pantazopoulos, Georgios and
Konstas, Ioannis and
Suglia, Alessandro",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.6",
pages = "73--85",
abstract = "Continual learning focuses on incrementally training a model on a sequence of tasks with the aim of learning new tasks while minimizing performance drop on previous tasks. Existing approaches at the intersection of Continual Learning and Visual Question Answering (VQA) do not study how the multimodal nature of the input affects the learning dynamics of a model. In this paper, we demonstrate that each modality evolves at different rates across a continuum of tasks and that this behavior occurs in established encoder-only models as well as modern recipes for developing Vision {\&} Language (VL) models. Motivated by this observation, we propose a modality-aware feature distillation (MAFED) approach which outperforms existing baselines across models of varying scale in three multimodal continual learning settings. Furthermore, we provide ablations showcasing that modality-aware distillation complements experience replay. Overall, our results emphasize the importance of addressing modality-specific dynamics to prevent forgetting in multimodal continual learning.",
}
|
Continual learning focuses on incrementally training a model on a sequence of tasks with the aim of learning new tasks while minimizing performance drop on previous tasks. Existing approaches at the intersection of Continual Learning and Visual Question Answering (VQA) do not study how the multimodal nature of the input affects the learning dynamics of a model. In this paper, we demonstrate that each modality evolves at different rates across a continuum of tasks and that this behavior occurs in established encoder-only models as well as modern recipes for developing Vision {\&} Language (VL) models. Motivated by this observation, we propose a modality-aware feature distillation (MAFED) approach which outperforms existing baselines across models of varying scale in three multimodal continual learning settings. Furthermore, we provide ablations showcasing that modality-aware distillation complements experience replay. Overall, our results emphasize the importance of addressing modality-specific dynamics to prevent forgetting in multimodal continual learning.
|
[
"Nik",
"rou, Malvina",
"Pantazopoulos, Georgios",
"Konstas, Ioannis",
"Suglia, Aless",
"ro"
] |
Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation
|
alvr-1.6
|
Poster
|
2406.19297v1
|
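Modality-aware feature distillation, as summarized in the MAFED record above, weights the distillation objective separately per modality because vision and language features drift at different rates across tasks. A schematic loss under that reading; the per-modality weights and the MSE form are assumptions, not the paper's exact recipe:

```python
import numpy as np

def mafed_loss(student, teacher, weights):
    """student/teacher: {modality: feature matrix} from the current model and
    a frozen copy from the previous task; weights: {modality: float} chosen
    to reflect how fast each modality's features drift."""
    total = 0.0
    for modality, s in student.items():
        t = teacher[modality]
        total += weights[modality] * np.mean((s - t) ** 2)  # per-modality MSE
    return total

s = {"vision": np.ones((4, 8)), "text": np.zeros((4, 8))}
t = {"vision": np.zeros((4, 8)), "text": np.zeros((4, 8))}
print(mafed_loss(s, t, {"vision": 0.7, "text": 0.3}))  # -> 0.7
```

A single shared weight would collapse this to ordinary feature distillation; the modality-specific weighting is the point of departure.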
https://aclanthology.org/2024.alvr-1.7.bib
|
@inproceedings{teramen-etal-2024-english,
title = "{E}nglish-to-{J}apanese Multimodal Machine Translation Based on Image-Text Matching of Lecture Videos",
author = "Teramen, Ayu and
Ohtsuka, Takumi and
Kondo, Risa and
Kajiwara, Tomoyuki and
Ninomiya, Takashi",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.7",
pages = "86--91",
abstract = "We work on a multimodal machine translation of the audio contained in English lecture videos to generate Japanese subtitles. Image-guided multimodal machine translation is promising for error correction in speech recognition and for text disambiguation. In our situation, lecture videos provide a variety of images. Images of presentation materials can complement information not available from audio and may help improve translation quality. However, images of speakers or audiences would not directly affect the translation quality. We construct a multimodal parallel corpus with automatic speech recognition text and multiple images for a transcribed parallel corpus of lecture videos, and propose a method to select the most relevant ones from the multiple images with the speech text for improving the performance of image-guided multimodal machine translation. Experimental results on translating automatic speech recognition or transcribed English text into Japanese show the effectiveness of our method to select a relevant image.",
}
|
We work on multimodal machine translation of the audio contained in English lecture videos to generate Japanese subtitles. Image-guided multimodal machine translation is promising for error correction in speech recognition and for text disambiguation. In our setting, lecture videos provide a variety of images. Images of presentation materials can complement information not available from the audio and may help improve translation quality, whereas images of speakers or audiences would not directly affect translation quality. We construct a multimodal parallel corpus of automatic speech recognition text and multiple images for a transcribed parallel corpus of lecture videos, and propose a method that uses the speech text to select the most relevant of the multiple images in order to improve the performance of image-guided multimodal machine translation. Experimental results on translating automatic speech recognition or transcribed English text into Japanese show the effectiveness of our method at selecting a relevant image.
|
[
"Teramen, Ayu",
"Ohtsuka, Takumi",
"Kondo, Risa",
"Kajiwara, Tomoyuki",
"Ninomiya, Takashi"
] |
{E}nglish-to-{J}apanese Multimodal Machine Translation Based on Image-Text Matching of Lecture Videos
|
alvr-1.7
|
Poster
|
2006.12799v1
|
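The image-text matching step described in the record above picks, from a video's many frames, the one most relevant to the speech text, so that slides win out over shots of the speaker. A minimal cosine-similarity version; the embedding functions are placeholders for whatever text and image encoders are used:

```python
import numpy as np

def select_image(text_embedding, image_embeddings):
    """Return the index of the image most similar to the speech text.
    text_embedding: (d,) vector; image_embeddings: (n, d) matrix."""
    text = text_embedding / np.linalg.norm(text_embedding)
    images = image_embeddings / np.linalg.norm(image_embeddings, axis=1,
                                               keepdims=True)
    return int(np.argmax(images @ text))  # highest cosine similarity

rng = np.random.default_rng(0)
print(select_image(rng.normal(size=16), rng.normal(size=(5, 16))))
```

The selected frame would then be fed, together with the ASR text, to the image-guided translation model.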
https://aclanthology.org/2024.alvr-1.8.bib
|
@inproceedings{wang-etal-2024-videocot,
title = "{V}ideo{C}o{T}: A Video Chain-of-Thought Dataset with Active Annotation Tool",
author = "Wang, Yan and
Zeng, Yawen and
Zheng, Jingsheng and
Xing, Xiaofen and
Xu, Jin and
Xu, Xiangmin",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.8",
pages = "92--101",
abstract = "Multimodal large language models (MLLMs) are flourishing, but mainly focus on images with less attention than videos, especially in sub-fields such as prompt engineering, video chain-of-though (CoT), and instruction tuning on videos. Therefore, we try to explore the collection of CoT datasets in videos to lead to video OpenQA and improve the reasoning ability of MLLMs. Unfortunately, making such video CoT datasets is not an easy task. Given that human annotation is too cumbersome and expensive, while machine-generated is not reliable due to the hallucination issue, we develop an automatic annotation tool that combines machine and human experts, under the active learning paradigm. Active learning is an interactive strategy between the model and human experts, in this way, the workload of human labeling can be reduced and the quality of the dataset can be guaranteed. With the help of the automatic annotation tool, we strive to contribute three datasets, namely VideoCoT, TopicQA, TopicCoT. Furthermore, we propose a simple but effective benchmark based on the collected datasets, which exploits CoT to maximize the complex reasoning capabilities of MLLMs. Extensive experiments demonstrate the effectiveness our solution, and we will release our source codes and datasets to facilitate the research community.",
}
|
Multimodal large language models (MLLMs) are flourishing, but they mainly focus on images, paying less attention to videos, especially in sub-fields such as prompt engineering, video chain-of-thought (CoT), and instruction tuning on videos. Therefore, we explore collecting video CoT datasets to enable video OpenQA and improve the reasoning ability of MLLMs. Unfortunately, building such video CoT datasets is not an easy task. Given that human annotation is too cumbersome and expensive, while machine-generated annotation is not reliable due to the hallucination issue, we develop an automatic annotation tool that combines machine and human experts under the active learning paradigm. Active learning is an interactive strategy between the model and human experts; in this way, the workload of human labeling can be reduced and the quality of the dataset can be guaranteed. With the help of the automatic annotation tool, we contribute three datasets, namely VideoCoT, TopicQA, and TopicCoT. Furthermore, we propose a simple but effective benchmark based on the collected datasets, which exploits CoT to maximize the complex reasoning capabilities of MLLMs. Extensive experiments demonstrate the effectiveness of our solution, and we will release our source code and datasets to facilitate the research community.
|
[
"Wang, Yan",
"Zeng, Yawen",
"Zheng, Jingsheng",
"Xing, Xiaofen",
"Xu, Jin",
"Xu, Xiangmin"
] |
{V}ideo{C}o{T}: A Video Chain-of-Thought Dataset with Active Annotation Tool
|
alvr-1.8
|
Poster
|
2407.05355v1
|
https://aclanthology.org/2024.alvr-1.9.bib
|
@inproceedings{rosch-etal-2024-enhancing,
title = "Enhancing Conceptual Understanding in Multimodal Contrastive Learning through Hard Negative Samples",
author = {R{\"o}sch, Philipp J. and
Oswald, Norbert and
Geierhos, Michaela and
Libovick{\'y}, Jind{\v{r}}ich},
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.9",
pages = "102--115",
abstract = "Current vision-language models leveraging contrastive learning often face limitations in developing fine-grained conceptual understanding. This is due to random negative samples during pretraining, causing almost exclusively very dissimilar concepts to be compared in the loss function. Consequently, the models struggle with fine-grained semantic differences. To address this problem, we introduce a novel pretraining method incorporating synthetic hard negative text examples. The hard negatives replace terms corresponding to visual concepts, leading to a more fine-grained visual and textual concept alignment. Further, we introduce InpaintCOCO, a new challenging dataset for assessing the fine-grained alignment of colors, objects, and sizes in vision-language models. We created the dataset using generative inpainting from COCO images by changing the visual concepts so that the images no longer match their original captions. Our results show significant improvements in fine-grained concept understanding across various vision-language datasets, including our InpaintCOCO dataset.",
}
|
Current vision-language models leveraging contrastive learning often face limitations in developing fine-grained conceptual understanding. This is due to random negative samples during pretraining, causing almost exclusively very dissimilar concepts to be compared in the loss function. Consequently, the models struggle with fine-grained semantic differences. To address this problem, we introduce a novel pretraining method incorporating synthetic hard negative text examples. The hard negatives replace terms corresponding to visual concepts, leading to a more fine-grained visual and textual concept alignment. Further, we introduce InpaintCOCO, a new challenging dataset for assessing the fine-grained alignment of colors, objects, and sizes in vision-language models. We created the dataset using generative inpainting from COCO images by changing the visual concepts so that the images no longer match their original captions. Our results show significant improvements in fine-grained concept understanding across various vision-language datasets, including our InpaintCOCO dataset.
|
[
"R{\\\"o}sch, Philipp J.",
"Oswald, Norbert",
"Geierhos, Michaela",
"Libovick{\\'y}, Jind{\\v{r}}ich"
] |
Enhancing Conceptual Understanding in Multimodal Contrastive Learning through Hard Negative Samples
|
alvr-1.9
|
Poster
|
2403.02875v2
|
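The abstract above states the method directly: captions with a visual concept term swapped serve as extra negatives during contrastive pretraining. Below is a minimal PyTorch sketch of an InfoNCE loss extended with such hard negatives; the batch shapes, temperature, and normalization are our assumptions, not the paper's released code.

import torch
import torch.nn.functional as F

def infonce_with_hard_negatives(img, txt, hard_txt, tau=0.07):
    """img: (B, D) image embeddings; txt: (B, D) matching caption embeddings;
    hard_txt: (B, D) embeddings of the same captions with a visual concept
    swapped (e.g. 'red car' -> 'blue car'). All rows are L2-normalized."""
    logits = img @ torch.cat([txt, hard_txt]).T / tau        # (B, 2B)
    targets = torch.arange(img.size(0), device=img.device)   # diagonal positives
    return F.cross_entropy(logits, targets)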
https://aclanthology.org/2024.alvr-1.10.bib
|
@inproceedings{xia-etal-2024-vision,
title = "Vision Language Models for Spreadsheet Understanding: Challenges and Opportunities",
author = "Xia, Shiyu and
Xiong, Junyu and
Dong, Haoyu and
Zhao, Jianbo and
Tian, Yuzhang and
Zhou, Mengyu and
He, Yeye and
Han, Shi and
Zhang, Dongmei",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.10",
pages = "116--128",
abstract = "This paper explores capabilities of Vision Language Models on spreadsheet comprehension. We propose three self-supervised challenges with corresponding evaluation metrics to comprehensively evaluate VLMs on Optical Character Recognition (OCR), spatial perception, and visual format recognition. Additionally, we utilize the spreadsheet table detection task to assess the overall performance of VLMs by integrating these challenges. To probe VLMs more finely, we propose three spreadsheet-to-image settings: column width adjustment, style change, and address augmentation. We propose variants of prompts to address the above tasks in different settings. Notably, to leverage the strengths of VLMs in understanding text rather than two-dimensional positioning, we propose to decode cell values on the four boundaries of the table in spreadsheet boundary detection. Our findings reveal that VLMs demonstrate promising OCR capabilities but produce unsatisfactory results due to cell omission and misalignment, and they notably exhibit insufficient spatial and format recognition skills, motivating future work to enhance VLMs{'} spreadsheet data comprehension capabilities using our methods to generate extensive spreadsheet-image pairs in various settings.",
}
|
This paper explores the capabilities of Vision Language Models (VLMs) in spreadsheet comprehension. We propose three self-supervised challenges with corresponding evaluation metrics to comprehensively evaluate VLMs on Optical Character Recognition (OCR), spatial perception, and visual format recognition. Additionally, we utilize the spreadsheet table detection task to assess the overall performance of VLMs by integrating these challenges. To probe VLMs more finely, we propose three spreadsheet-to-image settings: column width adjustment, style change, and address augmentation. We propose variants of prompts to address the above tasks in different settings. Notably, to leverage the strengths of VLMs in understanding text rather than two-dimensional positioning, we propose to decode cell values on the four boundaries of the table in spreadsheet boundary detection. Our findings reveal that VLMs demonstrate promising OCR capabilities but produce unsatisfactory results due to cell omission and misalignment, and they notably exhibit insufficient spatial and format recognition skills, motivating future work to enhance VLMs{'} spreadsheet data comprehension capabilities using our methods to generate extensive spreadsheet-image pairs in various settings.
|
[
"Xia, Shiyu",
"Xiong, Junyu",
"Dong, Haoyu",
"Zhao, Jianbo",
"Tian, Yuzhang",
"Zhou, Mengyu",
"He, Yeye",
"Han, Shi",
"Zhang, Dongmei"
] |
Vision Language Models for Spreadsheet Understanding: Challenges and Opportunities
|
alvr-1.10
|
Poster
|
2405.16234v2
|
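The boundary-decoding idea above lends itself to a short prompt sketch: rather than asking a VLM for row/column indices (where spatial grounding is weak), ask it to read out cell values along the table's four edges. The wording below is our own paraphrase, not a prompt taken from the paper.

def boundary_detection_prompt() -> str:
    # Hypothetical prompt in the spirit of spreadsheet boundary detection:
    # decode cell values on the four boundaries instead of predicting indices.
    return (
        "You are given a screenshot of a spreadsheet containing one table.\n"
        "List the cell values along the table's top row, bottom row, "
        "leftmost column, and rightmost column, in reading order.\n"
        "Return JSON with keys: top, bottom, left, right."
    )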
https://aclanthology.org/2024.alvr-1.11.bib
|
@inproceedings{wang-etal-2024-slideavsr,
title = "{S}lide{AVSR}: A Dataset of Paper Explanation Videos for Audio-Visual Speech Recognition",
author = "Wang, Hao and
Kurita, Shuhei and
Shimizu, Shuichiro and
Kawahara, Daisuke",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.11",
pages = "129--137",
abstract = "Audio-visual speech recognition (AVSR) is a multimodal extension of automatic speech recognition (ASR), using video as a complement to audio. In AVSR, considerable efforts have been directed at datasets for facial features such as lip-readings, while they often fall short in evaluating the image comprehension capabilities in broader contexts. In this paper, we construct SlideAVSR, an AVSR dataset using scientific paper explanation videos. SlideAVSR provides a new benchmark where models transcribe speech utterances with texts on the slides on the presentation recordings. As technical terminologies that are frequent in paper explanations are notoriously challenging to transcribe without reference texts, our SlideAVSR dataset spotlights a new aspect of AVSR problems. As a simple yet effective baseline, we propose DocWhisper, an AVSR model that can refer to textual information from slides, and confirm its effectiveness on SlideAVSR.",
}
|
Audio-visual speech recognition (AVSR) is a multimodal extension of automatic speech recognition (ASR), using video as a complement to audio. In AVSR, considerable efforts have been directed at datasets for facial features such as lip-reading, but these often fall short in evaluating image comprehension capabilities in broader contexts. In this paper, we construct SlideAVSR, an AVSR dataset using scientific paper explanation videos. SlideAVSR provides a new benchmark where models transcribe speech utterances with texts on the slides of the presentation recordings. As technical terminology that is frequent in paper explanations is notoriously challenging to transcribe without reference texts, our SlideAVSR dataset spotlights a new aspect of AVSR problems. As a simple yet effective baseline, we propose DocWhisper, an AVSR model that can refer to textual information from slides, and confirm its effectiveness on SlideAVSR.
|
[
"Wang, Hao",
"Kurita, Shuhei",
"Shimizu, Shuichiro",
"Kawahara, Daisuke"
] |
{S}lide{AVSR}: A Dataset of Paper Explanation Videos for Audio-Visual Speech Recognition
|
alvr-1.11
|
Poster
|
2401.09759v2
|
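The abstract says only that DocWhisper "can refer to textual information from slides". One simple way to realize that idea with off-the-shelf tools is to bias Whisper's decoder with OCR'd slide terms via its initial_prompt argument; this is a hedged stand-in for the authors' model, not their code.

import whisper  # pip install openai-whisper

def transcribe_with_slide_terms(audio_path: str, slide_terms: list[str]) -> str:
    # Feed OCR'd technical terms from the slides as a decoding prompt so rare
    # terminology is more likely to be transcribed correctly.
    model = whisper.load_model("base")
    prompt = "Slide keywords: " + ", ".join(slide_terms)
    result = model.transcribe(audio_path, initial_prompt=prompt)
    return result["text"]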
https://aclanthology.org/2024.alvr-1.12.bib
|
@inproceedings{hu-keller-2024-causal,
title = "Causal and Temporal Inference in Visual Question Generation by Utilizing Pre-trained Models",
author = "Hu, Zhanghao and
Keller, Frank",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.12",
pages = "138--154",
abstract = "Visual Question Generation is a task at the crossroads of visual and language learning, impacting broad domains like education, medicine, and social media. While existing pre-trained models excel in fact-based queries with image pairs, they fall short of capturing human-like inference, particularly in understanding causal and temporal relationships within videos. Additionally, the computational demands of prevalent pre-training methods pose challenges. In response, our study introduces a framework that leverages vision-text matching pre-trained models to guide language models in recognizing event-entity relationships within videos and generating inferential questions. Demonstrating efficacy on the NExT-QA dataset, which is designed for causal and temporal inference in visual question answering, our method successfully guides pre-trained language models in recognizing video content. We present methodologies for abstracting causal and temporal relationships between events and entities, pointing out the importance of consistent relationships among input frames during training and inference phases and suggesting an avenue for future exploration.",
}
|
Visual Question Generation is a task at the crossroads of visual and language learning, impacting broad domains like education, medicine, and social media. While existing pre-trained models excel in fact-based queries with image pairs, they fall short of capturing human-like inference, particularly in understanding causal and temporal relationships within videos. Additionally, the computational demands of prevalent pre-training methods pose challenges. In response, our study introduces a framework that leverages vision-text matching pre-trained models to guide language models in recognizing event-entity relationships within videos and generating inferential questions. Demonstrating efficacy on the NExT-QA dataset, which is designed for causal and temporal inference in visual question answering, our method successfully guides pre-trained language models in recognizing video content. We present methodologies for abstracting causal and temporal relationships between events and entities, pointing out the importance of consistent relationships among input frames during training and inference phases and suggesting an avenue for future exploration.
|
[
"Hu, Zhanghao",
"Keller, Frank"
] |
Causal and Temporal Inference in Visual Question Generation by Utilizing Pre-trained Models
|
alvr-1.12
|
Poster
|
2304.08083v2
|
https://aclanthology.org/2024.alvr-1.13.bib
|
@inproceedings{reinhardt-etal-2024-improving,
title = "Improving Vision-Language Cross-Lingual Transfer with Scheduled Unfreezing",
author = "Reinhardt, Max and
Geigle, Gregor and
Timofte, Radu and
Glava{\v{s}}, Goran",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.13",
pages = "155--166",
abstract = "Large-scale pretraining of vision-language (VL) models brought dramatic improvements across numerous tasks, from visual question-answering to cross-modal retrieval but these gains are mostly limited to English. Massively multilingual VL encoder models (mVLMs) hold promise for other languages: after fine-tuning on only English task data, they can perform the task in other languages in what is termed zero-shot cross-lingual transfer (ZS-XLT). Still, ZS-XLT sees a large performance gap to English, especially for low-resource languages. In this work, we reduce this gap with a fine-tuning strategy known as \textit{Scheduled Unfreezing} (SUF): instead of updating all parameters from the start, we begin with the top layer(s) of the vision-language encoder and gradually unfreeze (i.e., update) its layers top to bottom. SUF forces reliance on encoder{'}s representations from higher layers: the fact that in multilingual models these representations encode higher-level semantics rather than low-level language-specific idiosyncrasies, we hypothesize, should render SUF beneficial for ZS-XLT. Experiments with two mVLMs (UC2 {\&} CCLM) on three downstream tasks (xGQA, XVNLI, xFlickrCo) show that SUF brings consistent gains in ZS-XLT, especially for visual Q{\&}A (xGQA) by up to 10 points.",
}
|
Large-scale pretraining of vision-language (VL) models brought dramatic improvements across numerous tasks, from visual question-answering to cross-modal retrieval, but these gains are mostly limited to English. Massively multilingual VL encoder models (mVLMs) hold promise for other languages: after fine-tuning on only English task data, they can perform the task in other languages in what is termed zero-shot cross-lingual transfer (ZS-XLT). Still, ZS-XLT sees a large performance gap to English, especially for low-resource languages. In this work, we reduce this gap with a fine-tuning strategy known as \textit{Scheduled Unfreezing} (SUF): instead of updating all parameters from the start, we begin with the top layer(s) of the vision-language encoder and gradually unfreeze (i.e., update) its layers top to bottom. SUF forces reliance on the encoder{'}s representations from higher layers: because in multilingual models these representations encode higher-level semantics rather than low-level language-specific idiosyncrasies, we hypothesize that SUF should be beneficial for ZS-XLT. Experiments with two mVLMs (UC2 {\&} CCLM) on three downstream tasks (xGQA, XVNLI, xFlickrCo) show that SUF brings consistent gains in ZS-XLT, especially for visual Q{\&}A (xGQA) by up to 10 points.
|
[
"Reinhardt, Max",
"Geigle, Gregor",
"Timofte, Radu",
"Glava{\\v{s}}, Goran"
] |
Improving Vision-Language Cross-Lingual Transfer with Scheduled Unfreezing
|
alvr-1.13
|
Poster
|
2301.05487v2
|
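Scheduled Unfreezing as described above is easy to state in PyTorch: all encoder layers start frozen, and layers become trainable top to bottom as training progresses. The stage length and per-layer granularity below are assumptions; the paper may schedule differently.

import torch.nn as nn

def apply_suf(encoder_layers: list[nn.Module], step: int, steps_per_stage: int) -> None:
    """At stage k (= step // steps_per_stage), the top k+1 layers are trainable.
    encoder_layers is ordered bottom -> top."""
    stage = step // steps_per_stage
    n = len(encoder_layers)
    for i, layer in enumerate(encoder_layers):
        trainable = i >= n - 1 - stage  # unfreeze from the top down
        for p in layer.parameters():
            p.requires_grad = trainable

Called once per training step before the optimizer step, this reproduces the "gradually unfreeze top to bottom" behavior.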
https://aclanthology.org/2024.alvr-1.14.bib
|
@inproceedings{zhu-etal-2024-automatic,
title = "Automatic Layout Planning for Visually-Rich Documents with Instruction-Following Models",
author = "Zhu, Wanrong and
Zhang, Ruiyi and
Healey, Jennifer and
Wang, William Yang and
Sun, Tong",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.14",
pages = "167--172",
abstract = "Recent advancements in instruction-following models have made user interactions with models more user-friendly and efficient, broadening their applicability. In graphic design, non-professional users often struggle to create visually appealing layouts due to limited skills and resources. In this work, we introduce a novel multimodal instruction-following framework for layout planning, allowing users to easily arrange visual elements into tailored layouts by specifying canvas size and design purpose, such as for book covers, posters, brochures, or menus. We developed three layout reasoning tasks to train the model in understanding and executing layout instructions. Experiments on two benchmarks show that our method not only simplifies the design process for non-professionals but also surpasses the performance of few-shot GPT-4V models, with mIoU higher by 12{\%} on Crello. This progress highlights the potential of multimodal instruction-following models to automate and simplify the design process, providing an approachable solution for a wide range of design tasks on visually-rich documents.",
}
|
Recent advancements in instruction-following models have made user interactions with models more user-friendly and efficient, broadening their applicability. In graphic design, non-professional users often struggle to create visually appealing layouts due to limited skills and resources. In this work, we introduce a novel multimodal instruction-following framework for layout planning, allowing users to easily arrange visual elements into tailored layouts by specifying canvas size and design purpose, such as for book covers, posters, brochures, or menus. We developed three layout reasoning tasks to train the model in understanding and executing layout instructions. Experiments on two benchmarks show that our method not only simplifies the design process for non-professionals but also surpasses the performance of few-shot GPT-4V models, with mIoU higher by 12{\%} on Crello. This progress highlights the potential of multimodal instruction-following models to automate and simplify the design process, providing an approachable solution for a wide range of design tasks on visually-rich documents.
|
[
"Zhu, Wanrong",
"Zhang, Ruiyi",
"Healey, Jennifer",
"Wang, William Yang",
"Sun, Tong"
] |
Automatic Layout Planning for Visually-Rich Documents with Instruction-Following Models
|
alvr-1.14
|
Poster
|
2404.15271v1
|
https://aclanthology.org/2024.alvr-1.15.bib
|
@inproceedings{urailertprasert-etal-2024-sea,
title = "{SEA}-{VQA}: {S}outheast {A}sian Cultural Context Dataset For Visual Question Answering",
author = "Urailertprasert, Norawit and
Limkonchotiwat, Peerat and
Suwajanakorn, Supasorn and
Nutanong, Sarana",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.15",
pages = "173--185",
abstract = "Visual Question Answering (VQA) is a critical task that requires the simultaneous understanding of visual and textual information. While significant advancements have been made with multilingual datasets, these often lack cultural specificity, especially in the context of Southeast Asia (SEA). In this paper, we introduce SEA-VQA aiming to highlight the challenges and gaps in existing VQA models when confronted with culturally specific content. Our dataset includes images from eight SEA countries, curated from the UNESCO Cultural Heritage collection. Our evaluation, comparing GPT-4 and GEMINI models, demonstrates substantial performance drops on culture-centric questions compared to the A-OKVQA dataset, a commonsense and world-knowledge VQA benchmark comprising approximately 25,000 questions. Our findings underscore the importance of cultural diversity in VQA datasets and reveal substantial gaps in the ability of current VQA models to handle culturally rich contexts. SEA-VQA serves as a crucial benchmark for identifying these gaps and guiding future improvements in VQA systems.",
}
|
Visual Question Answering (VQA) is a critical task that requires the simultaneous understanding of visual and textual information. While significant advancements have been made with multilingual datasets, these often lack cultural specificity, especially in the context of Southeast Asia (SEA). In this paper, we introduce SEA-VQA, aiming to highlight the challenges and gaps in existing VQA models when confronted with culturally specific content. Our dataset includes images from eight SEA countries, curated from the UNESCO Cultural Heritage collection. Our evaluation, comparing GPT-4 and GEMINI models, demonstrates substantial performance drops on culture-centric questions compared to the A-OKVQA dataset, a commonsense and world-knowledge VQA benchmark comprising approximately 25,000 questions. Our findings underscore the importance of cultural diversity in VQA datasets and reveal substantial gaps in the ability of current VQA models to handle culturally rich contexts. SEA-VQA serves as a crucial benchmark for identifying these gaps and guiding future improvements in VQA systems.
|
[
"Urailertprasert, Norawit",
"Limkonchotiwat, Peerat",
"Suwajanakorn, Supasorn",
"Nutanong, Sarana"
] |
{SEA}-{VQA}: {S}outheast {A}sian Cultural Context Dataset For Visual Question Answering
|
alvr-1.15
|
Poster
|
2402.05374v2
|
https://aclanthology.org/2024.alvr-1.16.bib
|
@inproceedings{bielefeld-etal-2024-wiki,
title = "{W}iki-{VEL}: Visual Entity Linking for Structured Data on Wikimedia Commons",
author = {Bielefeld, Philipp and
Geppert, Jasmin and
G{\"u}ven, Necdet and
John, Melna and
Ziupka, Adrian and
Kaffee, Lucie-Aim{\'e}e and
Biswas, Russa and
De Melo, Gerard},
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.16",
pages = "186--194",
abstract = "Describing Wikimedia Commons images using Wikidata{'}s structured data enables a wide range of automation tasks, such as search and organization, as well as downstream tasks, such as labeling images or training machine learning models. However, there is currently a lack of structured data-labelled images on Wikimedia Commons.To close this gap, we propose the task of \textit{Visual Entity Linking (VEL) for Wikimedia Commons}, in which we create new labels for Wikimedia Commons images from Wikidata items. VEL is a crucial tool for improving information retrieval, search, content understanding, cross-modal applications, and various machine-learning tasks. In this paper, we propose a method to create new labels for Wikimedia Commons images from Wikidata items. To this end, we create a novel dataset leveraging community-created structured data on Wikimedia Commons and fine-tuning pre-trained models based on the CLIP architecture. Although the best-performing models show promising results, the study also identifies key challenges of the data and the task.",
}
|
Describing Wikimedia Commons images using Wikidata{'}s structured data enables a wide range of automation tasks, such as search and organization, as well as downstream tasks, such as labeling images or training machine learning models. However, there is currently a lack of images labelled with structured data on Wikimedia Commons. To close this gap, we propose the task of \textit{Visual Entity Linking (VEL) for Wikimedia Commons}, in which we create new labels for Wikimedia Commons images from Wikidata items. VEL is a crucial tool for improving information retrieval, search, content understanding, cross-modal applications, and various machine-learning tasks. In this paper, we propose a method to create new labels for Wikimedia Commons images from Wikidata items. To this end, we create a novel dataset leveraging community-created structured data on Wikimedia Commons and fine-tune pre-trained models based on the CLIP architecture. Although the best-performing models show promising results, the study also identifies key challenges of the data and the task.
|
[
"Bielefeld, Philipp",
"Geppert, Jasmin",
"G{\\\"u}ven, Necdet",
"John, Melna",
"Ziupka, Adrian",
"Kaffee, Lucie-Aim{\\'e}e",
"Biswas, Russa",
"De Melo, Gerard"
] |
{W}iki-{VEL}: Visual Entity Linking for Structured Data on Wikimedia Commons
|
alvr-1.16
|
Poster
|
1507.04180v1
|
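A zero-shot baseline for the VEL task above can be sketched with an off-the-shelf CLIP checkpoint: score a Commons image against candidate Wikidata labels and rank them. The checkpoint and the idea of ranking plain labels are our assumptions; the paper fine-tunes CLIP-based models on its new dataset.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def rank_entities(image_path: str, candidate_labels: list[str]):
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = proc(text=candidate_labels, images=Image.open(image_path),
                  return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    # Highest-scoring Wikidata label first.
    return sorted(zip(candidate_labels, probs.tolist()), key=lambda x: -x[1])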
https://aclanthology.org/2024.alvr-1.17.bib
|
@inproceedings{wazni-etal-2024-verbclip,
title = "{V}erb{CLIP}: Improving Verb Understanding in Vision-Language Models with Compositional Structures",
author = "Wazni, Hadi and
Lo, Kin and
Sadrzadeh, Mehrnoosh",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.17",
pages = "195--201",
abstract = "Verbs describe the dynamics of interactions between people, objects, and their environments. They play a crucial role in language formation and understanding. Nonetheless, recent vision-language models like CLIP predominantly rely on nouns and have a limited account of verbs. This limitation affects their performance in tasks requiring action recognition and scene understanding. In this work, we introduce VerbCLIP, a verb-centric vision-language model which learns meanings of verbs based on a compositional approach to statistical machine learning. Our methods significantly outperform CLIP in zero-shot performance on the VALSE, VL-Checklist, and SVO-Probes datasets, with improvements of +2.38{\%}, +3.14{\%}, and +1.47{\%}, without fine-tuning. Fine-tuning resulted in further improvements, with gains of +2.85{\%} and +9.2{\%} on the VALSE and VL-Checklist datasets.",
}
|
Verbs describe the dynamics of interactions between people, objects, and their environments. They play a crucial role in language formation and understanding. Nonetheless, recent vision-language models like CLIP predominantly rely on nouns and have a limited account of verbs. This limitation affects their performance in tasks requiring action recognition and scene understanding. In this work, we introduce VerbCLIP, a verb-centric vision-language model which learns meanings of verbs based on a compositional approach to statistical machine learning. Our methods significantly outperform CLIP in zero-shot performance on the VALSE, VL-Checklist, and SVO-Probes datasets, with improvements of +2.38{\%}, +3.14{\%}, and +1.47{\%}, without fine-tuning. Fine-tuning resulted in further improvements, with gains of +2.85{\%} and +9.2{\%} on the VALSE and VL-Checklist datasets.
|
[
"Wazni, Hadi",
"Lo, Kin",
"Sadrzadeh, Mehrnoosh"
] |
{V}erb{CLIP}: Improving Verb Understanding in Vision-Language Models with Compositional Structures
|
alvr-1.17
|
Poster
|
2304.06708v1
|
https://aclanthology.org/2024.alvr-1.18.bib
|
@inproceedings{narin-2024-evolutionary,
title = "Evolutionary Reward Design and Optimization with Multimodal Large Language Models",
author = "Narin, Ali",
editor = "Gu, Jing and
Fu, Tsu-Jui (Ray) and
Hudson, Drew and
Celikyilmaz, Asli and
Wang, William",
booktitle = "Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.alvr-1.18",
pages = "202--208",
abstract = "Designing reward functions is a pivotal yet challenging task for Reinforcement Learning (RL) practices, often demanding domain expertise and substantial effort. Recent studies have explored the utilization of Large Language Models (LLMs) to generate reward functions via evolutionary search techniques. However, these approaches overlook the potential of multimodal information, such as images and videos. In particular, prior methods predominantly rely on numerical feedback from the RL environment for doing evolution, neglecting the incorporation of visual data obtained during training. This study introduces a novel approach by employing Multimodal Large Language Models (MLLMs) to craft reward functions tailored for various RL tasks. The methodology involves providing MLLM with the RL environment{'}s code alongside its image as context and task information to generate reward candidates. Then, the chosen agent undergoes training, and the numerical feedback from the environment, along with the recorded video of the top-performing policy, is provided as feedback to the MLLM. By employing an iterative feedback mechanism through evolutionary search, MLLM consistently refines the reward function to maximize accuracy. Testing on two different agents points to the preeminence of our approach over previous methodology, which themselves outperformed 83{\%} of reward functions designed by human experts.",
}
|
Designing reward functions is a pivotal yet challenging task for Reinforcement Learning (RL) practices, often demanding domain expertise and substantial effort. Recent studies have explored the utilization of Large Language Models (LLMs) to generate reward functions via evolutionary search techniques. However, these approaches overlook the potential of multimodal information, such as images and videos. In particular, prior methods predominantly rely on numerical feedback from the RL environment for driving evolution, neglecting the incorporation of visual data obtained during training. This study introduces a novel approach by employing Multimodal Large Language Models (MLLMs) to craft reward functions tailored for various RL tasks. The methodology involves providing the MLLM with the RL environment{'}s code alongside its image as context and task information to generate reward candidates. Then, the chosen agent undergoes training, and the numerical feedback from the environment, along with the recorded video of the top-performing policy, is provided as feedback to the MLLM. By employing an iterative feedback mechanism through evolutionary search, the MLLM consistently refines the reward function to maximize accuracy. Testing on two different agents points to the preeminence of our approach over previous methodologies, which themselves outperformed 83{\%} of reward functions designed by human experts.
|
[
"Narin, Ali"
] |
Evolutionary Reward Design and Optimization with Multimodal Large Language Models
|
alvr-1.18
|
Poster
|
2406.10540v1
|
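The generate-train-feedback cycle described above can be sketched as a loop; the MLLM client, environment API, and selection rule are hypothetical placeholders standing in for whatever the authors used.

from typing import Callable

def evolve_reward(
    propose: Callable[[str], list[str]],                 # MLLM: context -> candidate reward fns (code)
    train_and_eval: Callable[[str], tuple[float, str]],  # trains agent, returns (score, rollout video path)
    env_context: str,                                    # env code plus its rendered image, per the abstract
    generations: int = 5,
) -> str:
    best_code, best_score, feedback = "", float("-inf"), ""
    for _ in range(generations):
        for code in propose(env_context + "\n" + feedback):
            score, video = train_and_eval(code)
            if score > best_score:
                best_code, best_score = code, score
                # numeric metrics and the top policy's video go back to the MLLM
                feedback = f"best score so far: {best_score}; rollout video: {video}"
    return best_code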
https://aclanthology.org/2024.arabicnlp-1.1.bib
|
@inproceedings{abdaljalil-mubarak-2024-wikidata,
title = "{W}ikidata as a Source of Demographic Information",
author = "Abdaljalil, Samir and
Mubarak, Hamdy",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.1",
pages = "1--10",
abstract = "Names carry important information about our identities and demographics such as gender, nationality, ethnicity, etc. We investigate the use of individual{'}s name, in both Arabic and English, to predict important attributes, namely country, region, gender, and language. We extract data from Wikidata, and normalize it, to build a comprehensive dataset consisting of more than 1 million entities and their normalized attributes. We experiment with a Linear SVM approach, as well as two Transformers approaches consisting of BERT model fine-tuning and Transformers pipeline. Our results indicate that we can predict the gender, language and region using the name only with a confidence over 0.65. The country attribute can be predicted with less accuracy. The Linear SVM approach outperforms the other approaches for all the attributes. The best performing approach was also evaluated on another dataset that consists of 1,500 names from 15 countries (covering different regions) extracted from Twitter, and yields similar results.",
}
|
Names carry important information about our identities and demographics such as gender, nationality, ethnicity, etc. We investigate the use of an individual{'}s name, in both Arabic and English, to predict important attributes, namely country, region, gender, and language. We extract data from Wikidata and normalize it to build a comprehensive dataset consisting of more than 1 million entities and their normalized attributes. We experiment with a Linear SVM approach, as well as two Transformers approaches consisting of BERT model fine-tuning and a Transformers pipeline. Our results indicate that we can predict the gender, language, and region using the name only with a confidence over 0.65. The country attribute can be predicted with less accuracy. The Linear SVM approach outperforms the other approaches for all the attributes. The best-performing approach was also evaluated on another dataset that consists of 1,500 names from 15 countries (covering different regions) extracted from Twitter, and yields similar results.
|
[
"Abdaljalil, Samir",
"Mubarak, Hamdy"
] |
{W}ikidata as a Source of Demographic Information
|
arabicnlp-1.1
|
Poster
|
1908.11153v2
|
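The Linear SVM approach above translates directly into a short scikit-learn pipeline. The abstract does not specify the features; character n-grams are our assumption, chosen because they work across Arabic and Latin scripts without tokenization.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def name_attribute_classifier(names: list[str], labels: list[str]):
    """Fit a char-n-gram Linear SVM mapping a name to one attribute
    (e.g. gender, region, or language)."""
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LinearSVC(),
    )
    clf.fit(names, labels)
    return clf

# e.g. clf = name_attribute_classifier(["Hamdy", "Mariam"], ["M", "F"])
#      clf.predict(["Samir"])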
https://aclanthology.org/2024.arabicnlp-1.2.bib
|
@inproceedings{almutairi-etal-2024-synthetic,
title = "Synthetic {A}rabic Medical Dialogues Using Advanced Multi-Agent {LLM} Techniques",
author = "ALMutairi, Mariam and
AlKulaib, Lulwah and
Aktas, Melike and
Alsalamah, Sara and
Lu, Chang-Tien",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.2",
pages = "11--26",
abstract = "The increasing use of artificial intelligence in healthcare requires robust datasets for training and validation, particularly in the domain of medical conversations. However, the creation and accessibility of such datasets in Arabic face significant challenges, especially due to the sensitivity and privacy concerns that are associated with medical conversations. These conversations are rarely recorded or preserved, making the availability of comprehensive Arabic medical dialogue datasets scarce. This limitation slows down not only the development of effective natural language processing models but also restricts the opportunity for open comparison of algorithms and their outcomes. Recent advancements in large language models (LLMs) like ChatGPT, GPT-4, Gemini-pro, and Claude-3 show promising capabilities in generating synthetic data. To address this gap, we introduce a novel Multi-Agent LLM approach capable of generating synthetic Arabic medical dialogues from patient notes, regardless of the original language. This development presents a significant step towards overcoming the barriers in dataset availability, enhancing the potential for broader research and application in AI-driven medical dialogue systems.",
}
|
The increasing use of artificial intelligence in healthcare requires robust datasets for training and validation, particularly in the domain of medical conversations. However, the creation and accessibility of such datasets in Arabic face significant challenges, especially due to the sensitivity and privacy concerns that are associated with medical conversations. These conversations are rarely recorded or preserved, making the availability of comprehensive Arabic medical dialogue datasets scarce. This limitation not only slows down the development of effective natural language processing models but also restricts the opportunity for open comparison of algorithms and their outcomes. Recent advancements in large language models (LLMs) like ChatGPT, GPT-4, Gemini-pro, and Claude-3 show promising capabilities in generating synthetic data. To address this gap, we introduce a novel Multi-Agent LLM approach capable of generating synthetic Arabic medical dialogues from patient notes, regardless of the original language. This development presents a significant step towards overcoming the barriers in dataset availability, enhancing the potential for broader research and application in AI-driven medical dialogue systems.
|
[
"ALMutairi, Mariam",
"AlKulaib, Lulwah",
"Aktas, Melike",
"Alsalamah, Sara",
"Lu, Chang-Tien"
] |
Synthetic {A}rabic Medical Dialogues Using Advanced Multi-Agent {LLM} Techniques
|
arabicnlp-1.2
|
Poster
|
2403.06611v1
|
https://aclanthology.org/2024.arabicnlp-1.3.bib
|
@inproceedings{haouari-etal-2024-aured,
title = "{A}u{RED}: Enabling {A}rabic Rumor Verification using Evidence from Authorities over {T}witter",
author = "Haouari, Fatima and
Elsayed, Tamer and
Suwaileh, Reem",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.3",
pages = "27--41",
abstract = "Diverging from the trend of the previous rumor verification studies, we introduce the new task of rumor verification using evidence that are exclusively captured from authorities, i.e., entities holding the right and knowledge to verify corresponding information. To enable research on this task for Arabic low-resourced language, we construct and release the first Authority-Rumor-Evidence Dataset (AuRED). The dataset comprises 160 rumors expressed in tweets and 692 Twitter timelines of authorities containing about 34k tweets. Additionally, we explore how existing evidence retrieval and claim verification models for fact-checking perform on our task under both the cross-lingual zero-shot and in-domain fine-tuning setups. Our experiments show that although evidence retrieval models perform relatively well on the task establishing strong baselines, there is still a big room for improvement. However, existing claim verification models perform poorly on the task no matter how good the retrieval performance is. The results also show that stance detection can be useful for evidence retrieval. Moreover, existing fact-checking datasets showed a potential in transfer learning to our task, however, further investigation using different datasets and setups is required.",
}
|
Diverging from the trend of previous rumor verification studies, we introduce the new task of rumor verification using evidence that is exclusively captured from authorities, i.e., entities holding the right and knowledge to verify the corresponding information. To enable research on this task for Arabic, a low-resource language, we construct and release the first Authority-Rumor-Evidence Dataset (AuRED). The dataset comprises 160 rumors expressed in tweets and 692 Twitter timelines of authorities containing about 34k tweets. Additionally, we explore how existing evidence retrieval and claim verification models for fact-checking perform on our task under both the cross-lingual zero-shot and in-domain fine-tuning setups. Our experiments show that although evidence retrieval models perform relatively well on the task, establishing strong baselines, there is still much room for improvement. However, existing claim verification models perform poorly on the task no matter how good the retrieval performance is. The results also show that stance detection can be useful for evidence retrieval. Moreover, existing fact-checking datasets showed potential for transfer learning to our task; however, further investigation using different datasets and setups is required.
|
[
"Haouari, Fatima",
"Elsayed, Tamer",
"Suwaileh, Reem"
] |
{A}u{RED}: Enabling {A}rabic Rumor Verification using Evidence from Authorities over {T}witter
|
arabicnlp-1.3
|
Poster
|
2301.05863v1
|
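A simple evidence-retrieval baseline for the task above: rank the matched authority's timeline against the rumor with BM25. The library and whitespace tokenization are our choices; the paper benchmarks stronger retrieval models.

from rank_bm25 import BM25Okapi  # pip install rank-bm25

def retrieve_evidence(rumor: str, authority_timeline: list[str], k: int = 5) -> list[str]:
    # Score every tweet in the authority's timeline as candidate evidence.
    bm25 = BM25Okapi([tweet.split() for tweet in authority_timeline])
    return bm25.get_top_n(rumor.split(), authority_timeline, n=k)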
https://aclanthology.org/2024.arabicnlp-1.4.bib
|
@inproceedings{alhafni-etal-2024-exploiting,
title = "Exploiting Dialect Identification in Automatic Dialectal Text Normalization",
author = "Alhafni, Bashar and
Al-Towaity, Sarah and
Fawzy, Ziyad and
Nassar, Fatema and
Eryani, Fadhl and
Bouamor, Houda and
Habash, Nizar",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.4",
pages = "42--54",
abstract = "Dialectal Arabic is the primary spoken language used by native Arabic speakers in daily communication. The rise of social media platforms has notably expanded its use as a written language. However, Arabic dialects do not have standard orthographies. This, combined with the inherent noise in user-generated content on social media, presents a major challenge to NLP applications dealing with Dialectal Arabic. In this paper, we explore and report on the task of CODAfication, which aims to normalize Dialectal Arabic into the Conventional Orthography for Dialectal Arabic (CODA). We work with a unique parallel corpus of multiple Arabic dialects focusing on five major city dialects. We benchmark newly developed pretrained sequence-to-sequence models on the task of CODAfication. We further show that using dialect identification information improves the performance across all dialects. We make our code, data, andpretrained models publicly available.",
}
|
Dialectal Arabic is the primary spoken language used by native Arabic speakers in daily communication. The rise of social media platforms has notably expanded its use as a written language. However, Arabic dialects do not have standard orthographies. This, combined with the inherent noise in user-generated content on social media, presents a major challenge to NLP applications dealing with Dialectal Arabic. In this paper, we explore and report on the task of CODAfication, which aims to normalize Dialectal Arabic into the Conventional Orthography for Dialectal Arabic (CODA). We work with a unique parallel corpus of multiple Arabic dialects focusing on five major city dialects. We benchmark newly developed pretrained sequence-to-sequence models on the task of CODAfication. We further show that using dialect identification information improves the performance across all dialects. We make our code, data, and pretrained models publicly available.
|
[
"Alhafni, Bashar",
"Al-Towaity, Sarah",
"Fawzy, Ziyad",
"Nassar, Fatema",
"Eryani, Fadhl",
"Bouamor, Houda",
"Habash, Nizar"
] |
Exploiting Dialect Identification in Automatic Dialectal Text Normalization
|
arabicnlp-1.4
|
Poster
|
2407.03020v1
|
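The abstract reports that dialect identification information improves CODAfication across dialects. One common way to inject such information into a sequence-to-sequence model is a dialect tag prepended to the input; the tag scheme and checkpoint path below are placeholders, not the authors' released models.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def codafy(text: str, dialect: str, model_name: str = "path/to/coda-seq2seq") -> str:
    # Prepend a dialect tag (e.g. <cairo>) so normalization can condition on it.
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    inputs = tok(f"<{dialect}> {text}", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
    return tok.decode(out[0], skip_special_tokens=True)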
https://aclanthology.org/2024.arabicnlp-1.5.bib
|
@inproceedings{liberato-etal-2024-strategies,
title = "Strategies for {A}rabic Readability Modeling",
author = "Liberato, Juan and
Alhafni, Bashar and
Khalil, Muhamed and
Habash, Nizar",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.5",
pages = "55--66",
abstract = "Automatic readability assessment is relevant to building NLP applications for education, content analysis, and accessibility. However, Arabic readability assessment is a challenging task due to Arabic{'}s morphological richness and limited readability resources. In this paper, we present a set of experimental results on Arabic readability assessment using a diverse range of approaches, from rule-based methods to Arabic pretrained language models. We report our results on a newly created corpus at different textual granularity levels (words and sentence fragments). Our results show that combining different techniques yields the best results, achieving an overall macro F1 score of 86.7 at the word level and 87.9 at the fragment level on a blind test set. We make our code, data, and pretrained models publicly available.",
}
|
Automatic readability assessment is relevant to building NLP applications for education, content analysis, and accessibility. However, Arabic readability assessment is a challenging task due to Arabic{'}s morphological richness and limited readability resources. In this paper, we present a set of experimental results on Arabic readability assessment using a diverse range of approaches, from rule-based methods to Arabic pretrained language models. We report our results on a newly created corpus at different textual granularity levels (words and sentence fragments). Our results show that combining different techniques yields the best results, achieving an overall macro F1 score of 86.7 at the word level and 87.9 at the fragment level on a blind test set. We make our code, data, and pretrained models publicly available.
|
[
"Liberato, Juan",
"Alhafni, Bashar",
"Khalil, Muhamed",
"Habash, Nizar"
] |
Strategies for {A}rabic Readability Modeling
|
arabicnlp-1.5
|
Poster
|
2407.03032v1
|
https://aclanthology.org/2024.arabicnlp-1.6.bib
|
@inproceedings{mraikhat-etal-2024-areej,
title = "{AREE}j: {A}rabic Relation Extraction with Evidence",
author = "Mraikhat, Osama and
Hamoud, Hadi and
Zaraket, Fadi",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.6",
pages = "67--72",
abstract = "Relational entity extraction is key in building knowledge graphs. A relational entity has a source, a tail and atype. In this paper, we consider Arabic text and introduce evidence enrichment which intuitivelyinforms models for better predictions. Relational evidence is an expression in the textthat explains how sources and targets relate. {\%}It also provides hints from which models learn. This paper augments the existing relational extraction dataset with evidence annotation to its 2.9-million Arabic relations.We leverage the augmented dataset to build , a relation extraction with evidence model from Arabic documents. The evidence augmentation model we constructed to complete the dataset achieved .82 F1-score (.93 precision, .73 recall). The target outperformed SOTA mREBEL with .72 F1-score (.78 precision, .66 recall).",
}
|
Relational entity extraction is key in building knowledge graphs. A relational entity has a source, a tail, and a type. In this paper, we consider Arabic text and introduce evidence enrichment, which intuitively informs models for better predictions. Relational evidence is an expression in the text that explains how sources and targets relate. This paper augments the existing relational extraction dataset with evidence annotation to its 2.9-million Arabic relations. We leverage the augmented dataset to build AREEj, a model that extracts relations with evidence from Arabic documents. The evidence augmentation model we constructed to complete the dataset achieved .82 F1-score (.93 precision, .73 recall). AREEj outperformed the SOTA mREBEL with .72 F1-score (.78 precision, .66 recall).
|
[
"Mraikhat, Osama",
"Hamoud, Hadi",
"Zaraket, Fadi"
] |
{AREE}j: {A}rabic Relation Extraction with Evidence
|
arabicnlp-1.6
|
Poster
|
2106.08657v2
|
https://aclanthology.org/2024.arabicnlp-1.7.bib
|
@inproceedings{boughorbel-etal-2024-improving,
title = "Improving Language Models Trained on Translated Data with Continual Pre-Training and Dictionary Learning Analysis",
author = "Boughorbel, Sabri and
Parvez, Md Rizwan and
Hawasly, Majd",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.7",
pages = "73--88",
abstract = "Training LLMs in low resources languages usually utilizes machine translation (MT) data augmentation from English language. However, translation brings a number of challenges: there are large costs attached to translating and curating huge amounts of content with high-end machine translation solutions; the translated content carries over cultural biases; and if the translation is not faithful and accurate, the quality of the data degrades causing issues in the trained model. In this work, we investigate the role of translation and synthetic data in training language models. We translate TinyStories, a dataset of 2.2M short stories for 3-4 year old children, from English to Arabic using the open NLLB-3B MT model. We train a number of story generation models of size 1M-33M parameters using this data. We identify a number of quality and task-specific issues in the resulting models. To rectify these issues, we further pre-train the models with a small dataset of synthesized high-quality stories generated by a capable LLM in Arabic, representing 1{\%} of the original training data. We show, using GPT-4 as a judge and dictionary learning analysis from mechanistic interpretability, that the suggested approach is a practical means to resolve some of the translation pitfalls. We illustrate the improvement through case studies of linguistic and cultural bias issues.",
}
|
Training LLMs in low-resource languages usually utilizes machine translation (MT) data augmentation from English. However, translation brings a number of challenges: there are large costs attached to translating and curating huge amounts of content with high-end machine translation solutions; the translated content carries over cultural biases; and if the translation is not faithful and accurate, the quality of the data degrades, causing issues in the trained model. In this work, we investigate the role of translation and synthetic data in training language models. We translate TinyStories, a dataset of 2.2M short stories for 3-4 year old children, from English to Arabic using the open NLLB-3B MT model. We train a number of story generation models of size 1M-33M parameters using this data. We identify a number of quality and task-specific issues in the resulting models. To rectify these issues, we further pre-train the models with a small dataset of synthesized high-quality stories generated by a capable LLM in Arabic, representing 1{\%} of the original training data. We show, using GPT-4 as a judge and dictionary learning analysis from mechanistic interpretability, that the suggested approach is a practical means to resolve some of the translation pitfalls. We illustrate the improvement through case studies of linguistic and cultural bias issues.
|
[
"Boughorbel, Sabri",
"Parvez, Md Rizwan",
"Hawasly, Majd"
] |
Improving Language Models Trained on Translated Data with Continual Pre-Training and Dictionary Learning Analysis
|
arabicnlp-1.7
|
Poster
|
2405.14277v2
|
https://aclanthology.org/2024.arabicnlp-1.8.bib
|
@inproceedings{elnokrashy-alkhamissi-2024-context,
title = "A Context-Contrastive Inference Approach To Partial Diacritization",
author = "ElNokrashy, Muhammad and
AlKhamissi, Badr",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.8",
pages = "89--101",
abstract = "Diacritization plays a pivotal role for meaning disambiguation and improving readability in Arabic texts. Efforts have long focused on marking every eligible character (Full Diacritization). Overlooked in comparison, Partial Diacritzation ({`}PD{`}) is the selection of a subset of characters to be annotated to aid comprehension only where needed. Research has indicated that excessive diacritic marks can hinder skilled readers{---}reducing reading speed and accuracy. We conduct a behavioral experiment and show that partially marked text is often easier to read than fully marked text, and sometimes easier than plain text. In this light, we introduce Context-Contrastive Partial Diacritization ({`}CCPD{`}){---}a novel approach to {`}PD{`} which integrates seamlessly with existing Arabic diacritization systems. {`}CCPD{`} processes each word twice, once with context and once without, and diacritizes only the characters with disparities between the two inferences. Further, we introduce novel indicators for measuring partial diacritization quality to help establish this as a machine learning task. Lastly, we introduce {`}TD2{`}, a Transformer-variant of an established model which offers a markedly different performance profile on our proposed indicators compared to all other known systems.",
}
|
Diacritization plays a pivotal role in disambiguating meaning and improving readability in Arabic texts. Efforts have long focused on marking every eligible character (Full Diacritization). Overlooked in comparison, Partial Diacritization ({`}PD{`}) is the selection of a subset of characters to be annotated to aid comprehension only where needed. Research has indicated that excessive diacritic marks can hinder skilled readers{---}reducing reading speed and accuracy. We conduct a behavioral experiment and show that partially marked text is often easier to read than fully marked text, and sometimes easier than plain text. In this light, we introduce Context-Contrastive Partial Diacritization ({`}CCPD{`}){---}a novel approach to {`}PD{`} which integrates seamlessly with existing Arabic diacritization systems. {`}CCPD{`} processes each word twice, once with context and once without, and diacritizes only the characters with disparities between the two inferences. Further, we introduce novel indicators for measuring partial diacritization quality to help establish this as a machine learning task. Lastly, we introduce {`}TD2{`}, a Transformer-variant of an established model which offers a markedly different performance profile on our proposed indicators compared to all other known systems.
|
[
"ElNokrashy, Muhammad",
"AlKhamissi, Badr"
] |
A Context-Contrastive Inference Approach To Partial Diacritization
|
arabicnlp-1.8
|
Poster
|
2401.08919v3
|
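CCPD's core rule is stated directly in the abstract: infer diacritics for each word once with sentence context and once without, and keep marks only where the two inferences disagree. The sketch below is a word-level simplification of that character-level rule, with the underlying full diacritizer left as a placeholder.

from typing import Callable

def ccpd(words: list[str], diacritize: Callable[[list[str]], list[str]]) -> list[str]:
    """Keep a word's diacritics only where context changes the inference,
    i.e. where context actually disambiguates. `diacritize` maps a word
    sequence to fully diacritized words (any full diacritization system)."""
    with_ctx = diacritize(words)                       # whole sentence at once
    without_ctx = [diacritize([w])[0] for w in words]  # each word in isolation
    return [wc if wc != wo else plain
            for wc, wo, plain in zip(with_ctx, without_ctx, words)]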
https://aclanthology.org/2024.arabicnlp-1.9.bib
|
@inproceedings{al-barham-etal-2024-araclip,
title = "{A}ra{CLIP}: Cross-Lingual Learning for Effective {A}rabic Image Retrieval",
author = "Al-Barham, Muhammad and
Afyouni, Imad and
Almubarak, Khalid and
Elnagar, Ashraf and
Turky, Ayad and
Hashem, Ibrahim",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.9",
pages = "102--110",
abstract = "This paper introduces Arabic Contrastive Language-Image Pre-training (AraCLIP), a model designed for Arabic image retrieval tasks, building upon the Contrastive Language-Image Pre-training (CLIP) architecture. AraCLIP leverages Knowledge Distillation to transfer cross-modal knowledge from English to Arabic, enhancing its ability to understand Arabic text and retrieve relevant images. Unlike existing multilingual models, AraCLIP is uniquely positioned to understand the intricacies of the Arabic language, including specific terms, cultural nuances, and contextual constructs. By leveraging the CLIP architecture as our foundation, we introduce a novel approach that seamlessly integrates textual and visual modalities, enabling AraCLIP to effectively retrieve images based on Arabic textual queries. We offer an online demonstration allowing users to input Arabic prompts and compare AraCLIP{'}s performance with state-of-the-art multilingual models. We conduct comprehensive experiments to evaluate AraCLIP{'}s performance across diverse datasets, including Arabic XTD-11, and Arabic Flicker 8k. Our results showcase AraCLIP{'}s superiority in image retrieval accuracy, demonstrating its effectiveness in handling Arabic queries. AraCLIP represents a significant advancement in cross-lingual image retrieval, offering promising applications in Arabic language processing and beyond.",
}
|
This paper introduces Arabic Contrastive Language-Image Pre-training (AraCLIP), a model designed for Arabic image retrieval tasks, building upon the Contrastive Language-Image Pre-training (CLIP) architecture. AraCLIP leverages Knowledge Distillation to transfer cross-modal knowledge from English to Arabic, enhancing its ability to understand Arabic text and retrieve relevant images. Unlike existing multilingual models, AraCLIP is uniquely positioned to understand the intricacies of the Arabic language, including specific terms, cultural nuances, and contextual constructs. By leveraging the CLIP architecture as our foundation, we introduce a novel approach that seamlessly integrates textual and visual modalities, enabling AraCLIP to effectively retrieve images based on Arabic textual queries. We offer an online demonstration allowing users to input Arabic prompts and compare AraCLIP{'}s performance with state-of-the-art multilingual models. We conduct comprehensive experiments to evaluate AraCLIP{'}s performance across diverse datasets, including Arabic XTD-11 and Arabic Flickr8k. Our results showcase AraCLIP{'}s superiority in image retrieval accuracy, demonstrating its effectiveness in handling Arabic queries. AraCLIP represents a significant advancement in cross-lingual image retrieval, offering promising applications in Arabic language processing and beyond.
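The distillation recipe can be approximated in a few lines: freeze an English CLIP text encoder as the teacher and train an Arabic student encoder to reproduce its caption embeddings over parallel captions. A hedged sketch; the checkpoint names are illustrative assumptions, not the released AraCLIP components.

```python
import torch
from transformers import (AutoModel, AutoTokenizer, CLIPTextModelWithProjection,
                          CLIPTokenizer)

teacher = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32").eval()
t_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
student = AutoModel.from_pretrained("aubmindlab/bert-base-arabertv2")   # assumed Arabic encoder
s_tok = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")
proj = torch.nn.Linear(student.config.hidden_size, teacher.config.projection_dim)

def distill_step(english_caption, arabic_caption, optimizer):
    with torch.no_grad():  # frozen teacher: English CLIP text embedding
        target = teacher(**t_tok(english_caption, return_tensors="pt")).text_embeds
    hidden = student(**s_tok(arabic_caption, return_tensors="pt")).last_hidden_state[:, 0]
    loss = torch.nn.functional.mse_loss(proj(hidden), target)  # match teacher embedding
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

Matching the text side to the frozen teacher is what makes this attractive: the CLIP image tower can then, presumably, be reused unchanged.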
|
[
"Al-Barham, Muhammad",
"Afyouni, Imad",
"Almubarak, Khalid",
"Elnagar, Ashraf",
"Turky, Ayad",
"Hashem, Ibrahim"
] |
{A}ra{CLIP}: Cross-Lingual Learning for Effective {A}rabic Image Retrieval
|
arabicnlp-1.9
|
Poster
|
2006.11586v1
|
https://aclanthology.org/2024.arabicnlp-1.10.bib
|
@inproceedings{elfqih-monti-2024-large,
title = "Large Language Models as Legal Translators of {A}rabic Legislatives: Does {C}hat{GPT} and Gemini Care for Context and Terminology?",
author = "ElFqih, Khadija and
Monti, Johanna",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.10",
pages = "111--122",
abstract = "Accurate translation of terminology and adaptation to in-context information is a pillar to high quality translation. Recently, there is a remarkable interest towards the use and the evaluation of Large Language Models (LLMs) particularly for Machine Translation tasks. Nevertheless, despite their recent advancement and ability to understand and generate human-like language, these LLMs are still far from perfect, especially in domain-specific scenarios, and need to be thoroughly investigated. This is particularly evident in automatically translating legal terminology from Arabic into English and French, where, beyond the inherent complexities of legal language and specialised translations, technical limitations of LLMs further hinder accurate generation of text. In this paper, we present a preliminary evaluation of two evolving LLMs, namely GPT-4 Generative Pre-trained Transformer and Gemini, as legal translators of Arabic legislatives to test their accuracy and the extent to which they care for context and terminology across two language pairs (ARâEN / ARâFR). The study targets the evaluation of Zero-Shot prompting for in-context and out-of-context scenarios of both models relying on a gold standard dataset, verified by professional translators who are also experts in the field. We evaluate the results applying the Multidimensional Quality Metrics to classify translation errors. Moreover, we also evaluate the general LLMs outputs to verify their correctness, consistency, and completeness.In general, our results show that the models are far from perfect and recall for more fine-tuning efforts using specialised terminological data in the legal domain from Arabic into English and French.",
}
|
Accurate translation of terminology and adaptation to in-context information is a pillar of high-quality translation. Recently, there is a remarkable interest towards the use and the evaluation of Large Language Models (LLMs) particularly for Machine Translation tasks. Nevertheless, despite their recent advancement and ability to understand and generate human-like language, these LLMs are still far from perfect, especially in domain-specific scenarios, and need to be thoroughly investigated. This is particularly evident in automatically translating legal terminology from Arabic into English and French, where, beyond the inherent complexities of legal language and specialised translations, technical limitations of LLMs further hinder accurate generation of text. In this paper, we present a preliminary evaluation of two evolving LLMs, namely GPT-4 Generative Pre-trained Transformer and Gemini, as legal translators of Arabic legislatives to test their accuracy and the extent to which they care for context and terminology across two language pairs (AR→EN / AR→FR). The study targets the evaluation of Zero-Shot prompting for in-context and out-of-context scenarios of both models relying on a gold standard dataset, verified by professional translators who are also experts in the field. We evaluate the results applying the Multidimensional Quality Metrics to classify translation errors. Moreover, we also evaluate the general LLMs outputs to verify their correctness, consistency, and completeness. In general, our results show that the models are far from perfect and call for more fine-tuning efforts using specialised terminological data in the legal domain from Arabic into English and French.
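The two zero-shot settings ("in-context" vs. "out-of-context") amount to prompting with or without the surrounding sentence. A minimal sketch, assuming the official openai Python client; the prompt wording is illustrative, not the paper's exact prompts.

```python
from openai import OpenAI  # assumes the official openai client is installed

client = OpenAI()

def translate_term(term, target="English", sentence=None, model="gpt-4"):
    if sentence:   # in-context scenario: the term embedded in its sentence
        prompt = (f"Translate the Arabic legal term '{term}' into {target} "
                  f"as it is used in this sentence:\n{sentence}")
    else:          # out-of-context scenario: the term in isolation
        prompt = f"Translate the Arabic legal term '{term}' into {target}."
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content
```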
|
[
"ElFqih, Khadija",
"Monti, Johanna"
] |
Large Language Models as Legal Translators of {A}rabic Legislatives: Does {C}hat{GPT} and Gemini Care for Context and Terminology?
|
arabicnlp-1.10
|
Poster
|
2308.03051v2
|
https://aclanthology.org/2024.arabicnlp-1.11.bib
|
@inproceedings{doan-etal-2024-towards,
title = "Towards Zero-Shot Text-To-Speech for {A}rabic Dialects",
author = "Doan, Khai and
Waheed, Abdul and
Abdul-Mageed, Muhammad",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.11",
pages = "123--129",
abstract = "Zero-shot multi-speaker text-to-speech (ZS-TTS) systems have advanced for English, however, it still lags behind due to insufficient resources. We address this gap for Arabic, a language of more than 450 million native speakers, by first adapting a sizeable existing dataset to suit the needs of speech synthesis. Additionally, we employ a set of Arabic dialect identification models to explore the impact of pre-defined dialect labels on improving the ZS-TTS model in a multi-dialect setting. Subsequently, we fine-tune the XTTS model, an open-source architecture. We then evaluate our models on a dataset comprising 31 unseen speakers and an in-house dialectal dataset. Our automated and human evaluation results show convincing performance while capable of generating dialectal speech. Our study highlights significant potential for improvements in this emerging area of research in Arabic.",
}
|
Zero-shot multi-speaker text-to-speech (ZS-TTS) systems have advanced for English; however, other languages still lag behind due to insufficient resources. We address this gap for Arabic, a language of more than 450 million native speakers, by first adapting a sizeable existing dataset to suit the needs of speech synthesis. Additionally, we employ a set of Arabic dialect identification models to explore the impact of pre-defined dialect labels on improving the ZS-TTS model in a multi-dialect setting. Subsequently, we fine-tune the XTTS model, an open-source architecture. We then evaluate our models on a dataset comprising 31 unseen speakers and an in-house dialectal dataset. Our automated and human evaluation results show convincing performance while remaining capable of generating dialectal speech. Our study highlights significant potential for improvements in this emerging area of research in Arabic.
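One concrete preprocessing step this describes, labeling each utterance with a pre-defined dialect before TTS fine-tuning, can be sketched as follows. The dialect-ID checkpoint name is a placeholder for any Arabic DID model, not the paper's ensemble.

```python
from transformers import pipeline

DID_MODEL = "your/arabic-dialect-id-checkpoint"   # hypothetical checkpoint name
dialect_id = pipeline("text-classification", model=DID_MODEL)

def tag_dialects(rows):
    """rows: dicts with 'audio_path' and 'transcript'. Adds a predicted
    dialect label to be used as conditioning metadata when fine-tuning
    the multi-dialect TTS model."""
    for row in rows:
        row["dialect"] = dialect_id(row["transcript"])[0]["label"]
    return rows
```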
|
[
"Doan, Khai",
"Waheed, Abdul",
"Abdul-Mageed, Muhammad"
] |
Towards Zero-Shot Text-To-Speech for {A}rabic Dialects
|
arabicnlp-1.11
|
Poster
|
2105.14779v2
|
https://aclanthology.org/2024.arabicnlp-1.12.bib
|
@inproceedings{mdhaffar-etal-2024-performance,
title = "Performance Analysis of Speech Encoders for Low-Resource {SLU} and {ASR} in {T}unisian Dialect",
author = "Mdhaffar, Salima and
Elleuch, Haroun and
Bougares, Fethi and
Est{\`e}ve, Yannick",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.12",
pages = "130--139",
abstract = "Speech encoders pretrained through self-supervised learning (SSL) have demonstrated remarkable performance in various downstream tasks, including Spoken Language Understanding (SLU) and Automatic Speech Recognition (ASR). For instance, fine-tuning SSL models for such tasks has shown significant potential, leading to improvements in the SOTA performance across challenging datasets.In contrast to existing research, this paper contributes by comparing the effectiveness of SSL approaches in the context of (i) the low-resource Spoken Tunisian Arabic Dialect and (ii) its combination with a low-resource SLU and ASR scenario, where only a few semantic annotations are available for fine-tuning. We conducted experiments using many SSL speech encoders on the TARIC-SLU dataset. We used speech encoders that were pre-trained on either monolingual or multilingual speech data. Some of them have also been refined without in-domain nor Tunisian data through a multimodal supervised teacher-student learning. The study made in this paper yields numerous significant findings that we will discuss in the paper.",
}
|
Speech encoders pretrained through self-supervised learning (SSL) have demonstrated remarkable performance in various downstream tasks, including Spoken Language Understanding (SLU) and Automatic Speech Recognition (ASR). For instance, fine-tuning SSL models for such tasks has shown significant potential, leading to improvements in the SOTA performance across challenging datasets. In contrast to existing research, this paper contributes by comparing the effectiveness of SSL approaches in the context of (i) the low-resource Spoken Tunisian Arabic Dialect and (ii) its combination with a low-resource SLU and ASR scenario, where only a few semantic annotations are available for fine-tuning. We conducted experiments using many SSL speech encoders on the TARIC-SLU dataset. We used speech encoders that were pre-trained on either monolingual or multilingual speech data. Some of them have also been refined without in-domain or Tunisian data through multimodal supervised teacher-student learning. This study yields numerous significant findings, which we discuss in the paper.
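The fine-tuning recipe such comparisons rely on, a pretrained SSL encoder topped with a light classification head, looks roughly like the following. The checkpoint and label count are illustrative, not the paper's exact setup.

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

NUM_LABELS = 10   # e.g., number of SLU intent classes (illustrative)
extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53", num_labels=NUM_LABELS)  # new head, random init

def training_step(waveform, label_id):
    inputs = extractor(waveform, sampling_rate=16_000, return_tensors="pt")
    out = model(**inputs, labels=torch.tensor([label_id]))  # cross-entropy loss
    return out.loss, out.logits
```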
|
[
"Mdhaffar, Salima",
"Elleuch, Haroun",
"Bougares, Fethi",
"Est{\\`e}ve, Yannick"
] |
Performance Analysis of Speech Encoders for Low-Resource {SLU} and {ASR} in {T}unisian Dialect
|
arabicnlp-1.12
|
Poster
|
2407.04533v2
|
https://aclanthology.org/2024.arabicnlp-1.13.bib
|
@inproceedings{el-shangiti-etal-2024-arabic,
title = "{A}rabic Automatic Story Generation with Large Language Models",
author = "El-Shangiti, Ahmed and
Alwajih, Fakhraddin and
Abdul-Mageed, Muhammad",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.13",
pages = "140--152",
abstract = "Large language models (LLMs) have recently emerged as a powerful tool for a wide range of language generation tasks. Nevertheless, this progress has been slower in Arabic. In this work, we focus on the task of generating stories from LLMs. For our training, we use stories acquired through machine translation (MT) as well as GPT-4. For the MT data, we develop a careful pipeline that ensures we acquire high-quality stories. For our GPT-4 data, we introduce crafted prompts that allow us to generate data well-suited to the Arabic context in both Modern Standard Arabic (MSA) and two Arabic dialects (Egyptian and Moroccan). For example, we generate stories tailored to various Arab countries on a wide host of topics. Our manual evaluation shows that our model fine-tuned on these training datasets can generate coherent stories that adhere to our instructions. We also conduct an extensive automatic and human evaluation comparing our models against state-of-the-art proprietary and open-source models. Our datasets and models will be made publicly available at \url{https://github.com/UBC-NLP/arastories}.",
}
|
Large language models (LLMs) have recently emerged as a powerful tool for a wide range of language generation tasks. Nevertheless, this progress has been slower in Arabic. In this work, we focus on the task of generating stories from LLMs. For our training, we use stories acquired through machine translation (MT) as well as GPT-4. For the MT data, we develop a careful pipeline that ensures we acquire high-quality stories. For our GPT-4 data, we introduce crafted prompts that allow us to generate data well-suited to the Arabic context in both Modern Standard Arabic (MSA) and two Arabic dialects (Egyptian and Moroccan). For example, we generate stories tailored to various Arab countries on a wide range of topics. Our manual evaluation shows that our model fine-tuned on these training datasets can generate coherent stories that adhere to our instructions. We also conduct an extensive automatic and human evaluation comparing our models against state-of-the-art proprietary and open-source models. Our datasets and models will be made publicly available at \url{https://github.com/UBC-NLP/arastories}.
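The "crafted prompts" idea reduces to a template parameterized by language variety, country, and topic. An illustrative version, not the paper's actual prompts:

```python
STORY_PROMPT = (
    "Write a short story in {variety} of roughly {n_words} words, "
    "set in {country}, about {topic}. Use culturally appropriate names "
    "and settings."
)

def build_prompt(variety="Modern Standard Arabic", country="Morocco",
                 topic="a family trip during Ramadan", n_words=300):
    # Vary (variety, country, topic) to cover dialects and Arab countries.
    return STORY_PROMPT.format(variety=variety, country=country,
                               topic=topic, n_words=n_words)
```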
|
[
"El-Shangiti, Ahmed",
"Alwajih, Fakhraddin",
"Abdul-Mageed, Muhammad"
] |
{A}rabic Automatic Story Generation with Large Language Models
|
arabicnlp-1.13
|
Poster
|
2407.07551v1
|
https://aclanthology.org/2024.arabicnlp-1.14.bib
|
@inproceedings{ahmed-etal-2024-alclam,
title = "{A}lcla{M}: {A}rabic Dialect Language Model",
author = "Ahmed, Murtadha and
Alfasly, Saghir and
Wen, Bo and
Addeen, Jamal and
Ahmed, Mohammed and
Liu, Yunfeng",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.14",
pages = "153--159",
abstract = "Pre-trained Language Models (PLMs) are integral to many modern natural language processing (NLP) systems. Although multilingual models cover a wide range of languages, they often grapple with challenges like high inference costs and a lack of diverse non-English training data. Arabic-specific PLMs are trained predominantly on modern standard Arabic, which compromises their performance on regional dialects. To tackle this, we construct an Arabic dialectal corpus comprising 3.4M sentences gathered from social media platforms. We utilize this corpus to expand the vocabulary and retrain a BERT-based model from scratch. Named AlcLaM, our model was trained using only 13GB of text, which represents a fraction of the data used by existing models such as CAMeL, MARBERT, and ArBERT, compared to 7.8{\%}{\%}, and 21.3{\%}, respectively. Remarkably, AlcLaM demonstrates superior performance on a variety of Arabic NLP tasks despite the limited training data. AlcLaM is available at: https://github.com/amurtadha/Alclam.",
}
|
Pre-trained Language Models (PLMs) are integral to many modern natural language processing (NLP) systems. Although multilingual models cover a wide range of languages, they often grapple with challenges like high inference costs and a lack of diverse non-English training data. Arabic-specific PLMs are trained predominantly on modern standard Arabic, which compromises their performance on regional dialects. To tackle this, we construct an Arabic dialectal corpus comprising 3.4M sentences gathered from social media platforms. We utilize this corpus to expand the vocabulary and retrain a BERT-based model from scratch. Named AlcLaM, our model was trained using only 13GB of text, which represents a fraction of the data used by existing models such as CAMeL, MARBERT, and ArBERT, compared to 7.8{\%} and 21.3{\%}, respectively. Remarkably, AlcLaM demonstrates superior performance on a variety of Arabic NLP tasks despite the limited training data. AlcLaM is available at: https://github.com/amurtadha/Alclam.
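The pipeline described, learn a dialect-aware vocabulary and then retrain a BERT-style masked LM from scratch, maps onto standard transformers utilities. A hedged sketch; the corpus path and sizes are illustrative.

```python
from transformers import AutoTokenizer, BertConfig, BertForMaskedLM

# 1) Learn a new vocabulary from the dialectal corpus.
base = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
lines = (l.strip() for l in open("dialect_corpus.txt", encoding="utf-8"))
tokenizer = base.train_new_from_iterator(lines, vocab_size=64_000)

# 2) Initialize a masked LM from scratch (random weights) over that vocabulary.
model = BertForMaskedLM(BertConfig(vocab_size=len(tokenizer)))
```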
|
[
"Ahmed, Murtadha",
"Alfasly, Saghir",
"Wen, Bo",
"Addeen, Jamal",
"Ahmed, Mohammed",
"Liu, Yunfeng"
] |
{A}lcla{M}: {A}rabic Dialect Language Model
|
arabicnlp-1.14
|
Poster
|
2305.16651v1
|
https://aclanthology.org/2024.arabicnlp-1.15.bib
|
@inproceedings{shatnawi-etal-2024-data,
title = "Data Augmentation for Speech-Based Diacritic Restoration",
author = "Shatnawi, Sara and
Alqahtani, Sawsan and
Shehata, Shady and
Aldarmaki, Hanan",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.15",
pages = "160--169",
abstract = "This paper describes a data augmentation technique for boosting the performance of speech-based diacritic restoration. Our experiments demonstrate the utility of this appraoch, resulting in improved generalization of all models across different test sets. In addition, we describe the first multi-modal diacritic restoration model, utilizing both speech and text as input modalities. This type of model can be used to diacritize speech transcripts. Unlike previous work that relies on an external ASR model, the proposed model is far more compact and efficient. While the multi-modal framework does not surpass the ASR-based model for this task, it offers a promising approach for improving the efficiency of speech-based diacritization, with a potential for improvement using data augmentation and other methods.",
}
|
This paper describes a data augmentation technique for boosting the performance of speech-based diacritic restoration. Our experiments demonstrate the utility of this approach, resulting in improved generalization of all models across different test sets. In addition, we describe the first multi-modal diacritic restoration model, utilizing both speech and text as input modalities. This type of model can be used to diacritize speech transcripts. Unlike previous work that relies on an external ASR model, the proposed model is far more compact and efficient. While the multi-modal framework does not surpass the ASR-based model for this task, it offers a promising approach for improving the efficiency of speech-based diacritization, with a potential for improvement using data augmentation and other methods.
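The abstract does not spell out the architecture, so the following is only a schematic multi-modal restorer under assumed design choices: concatenate a per-character text encoding with a pooled utterance-level speech encoding and classify one diacritic per character. All dimensions are illustrative.

```python
import torch
import torch.nn as nn

class MultiModalDiacritizer(nn.Module):
    def __init__(self, n_chars, n_diacritics, d_text=256, d_speech=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_text)
        self.text_enc = nn.LSTM(d_text, d_text, batch_first=True, bidirectional=True)
        self.speech_proj = nn.Linear(d_speech, 2 * d_text)
        self.classifier = nn.Linear(4 * d_text, n_diacritics)

    def forward(self, char_ids, speech_feats):
        text, _ = self.text_enc(self.char_emb(char_ids))           # (B, T, 2*d_text)
        speech = self.speech_proj(speech_feats.mean(dim=1))        # pooled utterance vector
        speech = speech.unsqueeze(1).expand(-1, text.size(1), -1)  # broadcast over chars
        return self.classifier(torch.cat([text, speech], dim=-1))  # per-character logits
```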
|
[
"Shatnawi, Sara",
"Alqahtani, Sawsan",
"Shehata, Shady",
"Aldarmaki, Hanan"
] |
Data Augmentation for Speech-Based Diacritic Restoration
|
arabicnlp-1.15
|
Poster
|
2311.10771v2
|
https://aclanthology.org/2024.arabicnlp-1.16.bib
|
@inproceedings{mokh-etal-2024-domain,
title = "Out-of-Domain Dependency Parsing for Dialects of {A}rabic: A Case Study",
author = {Mokh, Noor and
Dakota, Daniel and
K{\"u}bler, Sandra},
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.16",
pages = "170--182",
abstract = "We study dependency parsing for four Arabic dialects (Gulf, Levantine, Egyptian, and Maghrebi). Since no syntactically annotated data exist for Arabic dialects, we train the parser on a Modern Standard Arabic (MSA) corpus, which creates an out-of-domain setting.We investigate methods to close the gap between the source (MSA) and target data (dialects), e.g., by training on syntactically similar sentences to the test data. For testing, we manually annotate a small data set from a dialectal corpus. We focus on parsing two linguistic phenomena, which are difficult to parse: Idafa and coordination. We find that we can improve results by adding in-domain MSA data while adding dialectal embeddings only results in minor improvements.",
}
|
We study dependency parsing for four Arabic dialects (Gulf, Levantine, Egyptian, and Maghrebi). Since no syntactically annotated data exist for Arabic dialects, we train the parser on a Modern Standard Arabic (MSA) corpus, which creates an out-of-domain setting. We investigate methods to close the gap between the source (MSA) and target data (dialects), e.g., by training on syntactically similar sentences to the test data. For testing, we manually annotate a small data set from a dialectal corpus. We focus on parsing two linguistic phenomena, which are difficult to parse: Idafa and coordination. We find that we can improve results by adding in-domain MSA data while adding dialectal embeddings only results in minor improvements.
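"Training on syntactically similar sentences to the test data" can be realized in several ways; one hedged proxy, not necessarily the paper's measure, is ranking MSA sentences by POS-trigram overlap with the dialect sample and keeping the top-k.

```python
from collections import Counter

def pos_trigrams(pos_tags):
    """pos_tags: list of POS tags for one sentence."""
    return Counter(zip(pos_tags, pos_tags[1:], pos_tags[2:]))

def select_similar(msa_sents, target_sents, k=10_000):
    """Both arguments are lists of POS-tag sequences; returns the k MSA
    sentences whose trigram profile best overlaps the target sample."""
    target_profile = Counter()
    for tags in target_sents:
        target_profile += pos_trigrams(tags)
    def score(tags):
        grams = pos_trigrams(tags)
        return sum(min(c, target_profile[g]) for g, c in grams.items())
    return sorted(msa_sents, key=score, reverse=True)[:k]
```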
|
[
"Mokh, Noor",
"Dakota, Daniel",
"K{\\\"u}bler, S",
"ra"
] |
Out-of-Domain Dependency Parsing for Dialects of {A}rabic: A Case Study
|
arabicnlp-1.16
|
Poster
|
2005.00318v1
|
https://aclanthology.org/2024.arabicnlp-1.17.bib
|
@inproceedings{bassas-kubler-2024-investigating,
title = "Investigating Linguistic Features for {A}rabic {NLI}",
author = {Bassas, Yasmeen and
K{\"u}bler, Sandra},
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.17",
pages = "183--192",
abstract = "Native Language Identification (NLI) is concerned with predicting the native language of an author writing in a second language. We investigate NLI for Arabic, with a focus on the types of linguistic information given that Arabic is morphologically rich. We use the Arabic Learner Corpus (ALC) foro training and testing along with a linear SVM. We explore lexical, morpho-syntactic, and syntactic features. Results show that the best single type of information is character n-grams ranging from 2 to 6. Using this model, we achieve an accuracy of 61.84{\%}, thus outperforming previous results (Ionesco, 2015) by 11.74{\%} even though we use an additional 2 L1s. However, when using prefix and suffix sequences, we reach an accuracy of 53.95{\%}, showing that an approximation of unlexicalized features still reaches solid results.",
}
|
Native Language Identification (NLI) is concerned with predicting the native language of an author writing in a second language. We investigate NLI for Arabic, with a focus on the types of linguistic information given that Arabic is morphologically rich. We use the Arabic Learner Corpus (ALC) for training and testing along with a linear SVM. We explore lexical, morpho-syntactic, and syntactic features. Results show that the best single type of information is character n-grams ranging from 2 to 6. Using this model, we achieve an accuracy of 61.84{\%}, thus outperforming previous results (Ionesco, 2015) by 11.74{\%} even though we use an additional 2 L1s. However, when using prefix and suffix sequences, we reach an accuracy of 53.95{\%}, showing that an approximation of unlexicalized features still reaches solid results.
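The winning configuration, character 2- to 6-grams with a linear SVM, corresponds directly to a standard scikit-learn pipeline. The toy data is a placeholder, and whether the features are tf-idf weighted or raw counts is an assumption here.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_texts = ["learner essay one ...", "learner essay two ..."]   # placeholder data
train_l1 = ["L1_A", "L1_B"]                                        # native-language labels

nli = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 6)),  # char 2- to 6-grams
    LinearSVC(),
)
nli.fit(train_texts, train_l1)
print(nli.predict(["unseen learner essay ..."]))
```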
|
[
"Bassas, Yasmeen",
"K{\\\"u}bler, S",
"ra"
] |
Investigating Linguistic Features for {A}rabic {NLI}
|
arabicnlp-1.17
|
Poster
|
2309.06923v1
|
https://aclanthology.org/2024.arabicnlp-1.18.bib
|
@inproceedings{demidova-etal-2024-john,
title = "John vs. Ahmed: Debate-Induced Bias in Multilingual {LLM}s",
author = "Demidova, Anastasiia and
Atwany, Hanin and
Rabih, Nour and
Sha{'}ban, Sanad and
Abdul-Mageed, Muhammad",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.18",
pages = "193--209",
abstract = "Large language models (LLMs) play a crucial role in a wide range of real world applications. However, concerns about their safety and ethical implications are growing. While research on LLM safety is expanding, there is a noticeable gap in evaluating safety across multiple languages, especially in Arabic and Russian. We address this gap by exploring biases in LLMs across different languages and contexts, focusing on GPT-3.5 and Gemini. Through carefully designed argument-based prompts and scenarios in Arabic, English, and Russian, we examine biases in cultural, political, racial, religious, and gender domains. Our findings reveal biases in these domains. In particular, our investigation uncovers subtle biases where each model tends to present winners as those speaking the primary language the model is prompted with. Our study contributes to ongoing efforts to ensure justice and equality in LLM development and emphasizes the importance of further research towards responsible progress in this field.",
}
|
Large language models (LLMs) play a crucial role in a wide range of real world applications. However, concerns about their safety and ethical implications are growing. While research on LLM safety is expanding, there is a noticeable gap in evaluating safety across multiple languages, especially in Arabic and Russian. We address this gap by exploring biases in LLMs across different languages and contexts, focusing on GPT-3.5 and Gemini. Through carefully designed argument-based prompts and scenarios in Arabic, English, and Russian, we examine biases in cultural, political, racial, religious, and gender domains. Our findings reveal biases in these domains. In particular, our investigation uncovers subtle biases where each model tends to present winners as those speaking the primary language the model is prompted with. Our study contributes to ongoing efforts to ensure justice and equality in LLM development and emphasizes the importance of further research towards responsible progress in this field.
|
[
"Demidova, Anastasiia",
"Atwany, Hanin",
"Rabih, Nour",
"Sha{'}ban, Sanad",
"Abdul-Mageed, Muhammad"
] |
John vs. Ahmed: Debate-Induced Bias in Multilingual {LLM}s
|
arabicnlp-1.18
|
Poster
|
2402.18045v2
|
https://aclanthology.org/2024.arabicnlp-1.19.bib
|
@inproceedings{bhatia-etal-2024-qalam,
title = "Qalam: A Multimodal {LLM} for {A}rabic Optical Character and Handwriting Recognition",
author = "Bhatia, Gagan and
Nagoudi, El Moatez Billah and
Alwajih, Fakhraddin and
Abdul-Mageed, Muhammad",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.19",
pages = "210--224",
abstract = "Arabic Optical Character Recognition (OCR) and Handwriting Recognition (HWR) pose unique challenges due to the cursive and context-sensitive nature of the Arabic script. This study introduces ***Qalam***, a novel foundation model designed for Arabic OCR and HWR, built on a SwinV2 encoder and RoBERTa decoder architecture. Our model significantly outperforms existing methods, achieving a Word Error Rate (WER) of just 0.80{\%} in HWR tasks and 1.18{\%} in OCR tasks. We train ***Qalam*** on a diverse dataset, including over 4.5 million images from Arabic manuscripts and a synthetic dataset comprising 60k image-text pairs. Notably, ***Qalam*** demonstrates exceptional handling of Arabic diacritics, a critical feature in Arabic scripts. Furthermore, it shows a remarkable ability to process high-resolution inputs, addressing a common limitation in current OCR systems. These advancements underscore ***Qalam***{'}s potential as a leading solution for Arabic script recognition, offering a significant leap in accuracy and efficiency.",
}
|
Arabic Optical Character Recognition (OCR) and Handwriting Recognition (HWR) pose unique challenges due to the cursive and context-sensitive nature of the Arabic script. This study introduces ***Qalam***, a novel foundation model designed for Arabic OCR and HWR, built on a SwinV2 encoder and RoBERTa decoder architecture. Our model significantly outperforms existing methods, achieving a Word Error Rate (WER) of just 0.80{\%} in HWR tasks and 1.18{\%} in OCR tasks. We train ***Qalam*** on a diverse dataset, including over 4.5 million images from Arabic manuscripts and a synthetic dataset comprising 60k image-text pairs. Notably, ***Qalam*** demonstrates exceptional handling of Arabic diacritics, a critical feature in Arabic scripts. Furthermore, it shows a remarkable ability to process high-resolution inputs, addressing a common limitation in current OCR systems. These advancements underscore ***Qalam***{'}s potential as a leading solution for Arabic script recognition, offering a significant leap in accuracy and efficiency.
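A SwinV2 encoder paired with a RoBERTa decoder can be assembled with the generic vision-encoder/text-decoder wrapper in transformers. The checkpoints below are illustrative stand-ins, not the released Qalam weights.

```python
from transformers import (AutoImageProcessor, AutoTokenizer,
                          VisionEncoderDecoderModel)

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/swinv2-base-patch4-window8-256",  # SwinV2 vision encoder
    "FacebookAI/roberta-base",                   # RoBERTa decoder (cross-attention added)
)
processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-base-patch4-window8-256")
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")

# Required generation configuration for the decoder side.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```

Fine-tuning then proceeds as image-to-text sequence generation over line images and their transcriptions.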
|
[
"Bhatia, Gagan",
"Nagoudi, El Moatez Billah",
"Alwajih, Fakhraddin",
"Abdul-Mageed, Muhammad"
] |
Qalam: A Multimodal {LLM} for {A}rabic Optical Character and Handwriting Recognition
|
arabicnlp-1.19
|
Poster
|
2407.13559v1
|
https://aclanthology.org/2024.arabicnlp-1.20.bib
|
@inproceedings{hijazi-etal-2024-arablegaleval,
title = "{A}rab{L}egal{E}val: A Multitask Benchmark for Assessing {A}rabic Legal Knowledge in Large Language Models",
author = "Hijazi, Faris and
Alharbi, Somayah and
AlHussein, Abdulaziz and
Shairah, Harethah and
Alzahrani, Reem and
Alshamlan, Hebah and
Turkiyyah, George and
Knio, Omar",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.20",
pages = "225--249",
abstract = "The rapid advancements in Large Language Models (LLMs) have led to significant improvements in various natural language processing tasks. However, the evaluation of LLMs{'} legal knowledge, particularly in non English languages such as Arabic, remains under-explored. To address this gap, we introduce ArabLegalEval, a multitask benchmark dataset for assessing the Arabic legal knowledge of LLMs. Inspired by the MMLU and LegalBench datasets, ArabLegalEval consists of multiple tasks sourced from Saudi legal documents and synthesized questions. In this work, we aim to analyze the capabilities required to solve legal problems in Arabic and benchmark the performance of state-of-the-art LLMs. We explore the impact of in-context learning on performance and investigate various evaluation methods. Additionally, we explore workflows for automatically generating questions with automatic validation to enhance the dataset{'}s quality. By releasing ArabLegalEval and our code, we hope to accelerate AI research in the Arabic Legal domain",
}
|
The rapid advancements in Large Language Models (LLMs) have led to significant improvements in various natural language processing tasks. However, the evaluation of LLMs{'} legal knowledge, particularly in non-English languages such as Arabic, remains under-explored. To address this gap, we introduce ArabLegalEval, a multitask benchmark dataset for assessing the Arabic legal knowledge of LLMs. Inspired by the MMLU and LegalBench datasets, ArabLegalEval consists of multiple tasks sourced from Saudi legal documents and synthesized questions. In this work, we aim to analyze the capabilities required to solve legal problems in Arabic and benchmark the performance of state-of-the-art LLMs. We explore the impact of in-context learning on performance and investigate various evaluation methods. Additionally, we explore workflows for automatically generating questions with automatic validation to enhance the dataset{'}s quality. By releasing ArabLegalEval and our code, we hope to accelerate AI research in the Arabic legal domain.
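A benchmark of this shape is typically consumed with a simple multiple-choice loop; `ask_model` below is a hypothetical callable wrapping whichever LLM is being assessed, and the prompt wording is illustrative.

```python
def evaluate_mcq(examples, ask_model):
    """examples: iterable of dicts with 'question', 'choices' (list), and
    'answer' (index of the correct choice). Returns accuracy."""
    correct = 0
    for ex in examples:
        options = "\n".join(f"{i}. {c}" for i, c in enumerate(ex["choices"]))
        prompt = (f"{ex['question']}\n{options}\n"
                  "Answer with the number of the correct choice only.")
        reply = ask_model(prompt).strip()
        correct += reply.startswith(str(ex["answer"]))
    return correct / len(examples)
```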
|
[
"Hijazi, Faris",
"Alharbi, Somayah",
"AlHussein, Abdulaziz",
"Shairah, Harethah",
"Alzahrani, Reem",
"Alshamlan, Hebah",
"Turkiyyah, George",
"Knio, Omar"
] |
{A}rab{L}egal{E}val: A Multitask Benchmark for Assessing {A}rabic Legal Knowledge in Large Language Models
|
arabicnlp-1.20
|
Poster
|
2402.12840v2
|
https://aclanthology.org/2024.arabicnlp-1.21.bib
|
@inproceedings{alasmary-etal-2024-catt,
title = "{CATT}: Character-based {A}rabic Tashkeel Transformer",
author = "Alasmary, Faris and
Zaafarani, Orjuwan and
Ghannam, Ahmad",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.21",
pages = "250--257",
abstract = "Tashkeel, or Arabic Text Diacritization (ATD), greatly enhances the comprehension of Arabic text by removing ambiguity and minimizing the risk of misinterpretations caused by its absence.It plays a crucial role in improving Arabic text processing, particularly in applications such as text-to-speech and machine translation.This paper introduces a new approach to training ATD models.First, we finetuned two transformers, encoder-only and encoder-decoder, that were initialized from a pretrained character-based BERT.Then, we applied the Noisy-Student approach to boost the performance of the best model.We evaluated our models alongside 11 commercial and open-source models using two manually labeled benchmark datasets: WikiNews and our CATT dataset.Our findings show that our top model surpasses all evaluated models by relative Diacritic Error Rates (DERs) of 30.83{\%} and 35.21{\%} on WikiNews and CATT, respectively, achieving state-of-the-art in ATD.In addition, we show that our model outperforms GPT-4-turbo on CATT dataset by a relative DER of 9.36{\%}.We open-source our CATT models and benchmark dataset for the research community .",
}
|
Tashkeel, or Arabic Text Diacritization (ATD), greatly enhances the comprehension of Arabic text by removing ambiguity and minimizing the risk of misinterpretations caused by its absence. It plays a crucial role in improving Arabic text processing, particularly in applications such as text-to-speech and machine translation. This paper introduces a new approach to training ATD models. First, we finetuned two transformers, encoder-only and encoder-decoder, that were initialized from a pretrained character-based BERT. Then, we applied the Noisy-Student approach to boost the performance of the best model. We evaluated our models alongside 11 commercial and open-source models using two manually labeled benchmark datasets: WikiNews and our CATT dataset. Our findings show that our top model surpasses all evaluated models by relative Diacritic Error Rates (DERs) of 30.83{\%} and 35.21{\%} on WikiNews and CATT, respectively, achieving state-of-the-art in ATD. In addition, we show that our model outperforms GPT-4-turbo on CATT dataset by a relative DER of 9.36{\%}. We open-source our CATT models and benchmark dataset for the research community.
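The encoder-only variant amounts to token classification over characters: a character-level encoder predicting one diacritic class per position. The checkpoint and label-set size below are illustrative stand-ins for the pretrained character-based BERT the paper starts from.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

N_DIACRITIC_CLASSES = 15   # illustrative label-set size (incl. "no mark")
tok = AutoTokenizer.from_pretrained("google/canine-s")   # assumed char-level encoder
model = AutoModelForTokenClassification.from_pretrained(
    "google/canine-s", num_labels=N_DIACRITIC_CLASSES)

def diacritic_logits(text):
    enc = tok(text, return_tensors="pt")
    return model(**enc).logits   # shape: (1, n_chars, N_DIACRITIC_CLASSES)
```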
|
[
"Alasmary, Faris",
"Zaafarani, Orjuwan",
"Ghannam, Ahmad"
] |
{CATT}: Character-based {A}rabic Tashkeel Transformer
|
arabicnlp-1.21
|
Poster
|
2407.03236v3
|
https://aclanthology.org/2024.arabicnlp-1.22.bib
|
@inproceedings{khalifa-etal-2024-picking,
title = "Picking Up Where the Linguist Left Off: Mapping Morphology to Phonology through Learning the Residuals",
author = "Khalifa, Salam and
Qaddoumi, Abdelrahim and
Broselow, Ellen and
Rambow, Owen",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.22",
pages = "258--264",
abstract = "Learning morphophonological mappings between the spoken form of a language and its underlying morphological structures is crucial for enriching resources for morphologically rich languages like Arabic. In this work, we focus on Egyptian Arabic as our case study and explore the integration of linguistic knowledge with a neural transformer model. Our approach involves learning to correct the residual errors from hand-crafted rules to predict the spoken form from a given underlying morphological representation. We demonstrate that using a minimal set of rules, we can effectively recover errors even in very low-resource settings.",
}
|
Learning morphophonological mappings between the spoken form of a language and its underlying morphological structures is crucial for enriching resources for morphologically rich languages like Arabic. In this work, we focus on Egyptian Arabic as our case study and explore the integration of linguistic knowledge with a neural transformer model. Our approach involves learning to correct the residual errors from hand-crafted rules to predict the spoken form from a given underlying morphological representation. We demonstrate that using a minimal set of rules, we can effectively recover errors even in very low-resource settings.
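The residual setup is easiest to see in how the training pairs are built: the learner's input is the rule output, not the raw underlying form, so the model only has to fix what the rules get wrong. A toy sketch; the real rule set is hand-crafted linguistics, not this one-liner.

```python
def apply_rules(underlying):
    """Stand-in for the hand-crafted morphology-to-phonology rules:
    here, a single toy rule that drops morpheme boundaries."""
    return underlying.replace("+", "")

def residual_pairs(corpus):
    """corpus: (underlying_repr, gold_spoken_form) pairs. The seq2seq
    learner is trained on (rule_output -> gold), i.e., on the residuals."""
    return [(apply_rules(u), gold) for u, gold in corpus]
```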
|
[
"Khalifa, Salam",
"Qaddoumi, Abdelrahim",
"Broselow, Ellen",
"Rambow, Owen"
] |
Picking Up Where the Linguist Left Off: Mapping Morphology to Phonology through Learning the Residuals
|
arabicnlp-1.22
|
Poster
|
9607013v1
|
https://aclanthology.org/2024.arabicnlp-1.23.bib
|
@inproceedings{alcoba-inciarte-etal-2024-utility,
title = "On the Utility of Pretraining Language Models on Synthetic Data",
author = "Alcoba Inciarte, Alcides and
Kwon, Sang Yun and
Nagoudi, El Moatez Billah and
Abdul-Mageed, Muhammad",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.23",
pages = "265--282",
abstract = "Development of pre-trained language models has predominantly relied on large amounts of datasets. However, this dependence on abundant data has limited the applicability of these models in low-resource settings. In this work, we investigate the utility of exploiting synthetic datasets acquired from different sources to pre-train language models for Arabic. Namely, we leverage data derived based on four different methods: optical character recognition (OCR), automatic speech recognition (ASR), machine translation (MT), and generative language models. We use these datasets to pre-train models in three different architectures: encoder-only (BERTBase), encoder-decoder (T5), and decoder-only (GPT-2). We test the capabilities of resulting models on Arabic natural language understanding (NLU) tasks using the ORCA benchmark. Our results show that utilizing synthetic data can achieve performance comparable to, or even surpassing, those trained on gold data. For example, our model based on a GPT-2 architecture trained on a combined synthetic dataset surpasses the baseline model ARBERTv2. Overall, our models pre-trained on synthetic data demonstrate robust performance across various tasks. This highlights the potential of synthetic datasets in augmenting language model training in low-resource settings.",
}
|
Development of pre-trained language models has predominantly relied on large amounts of data. However, this dependence on abundant data has limited the applicability of these models in low-resource settings. In this work, we investigate the utility of exploiting synthetic datasets acquired from different sources to pre-train language models for Arabic. Namely, we leverage data derived based on four different methods: optical character recognition (OCR), automatic speech recognition (ASR), machine translation (MT), and generative language models. We use these datasets to pre-train models in three different architectures: encoder-only (BERTBase), encoder-decoder (T5), and decoder-only (GPT-2). We test the capabilities of resulting models on Arabic natural language understanding (NLU) tasks using the ORCA benchmark. Our results show that utilizing synthetic data can achieve performance comparable to, or even surpassing, those trained on gold data. For example, our model based on a GPT-2 architecture trained on a combined synthetic dataset surpasses the baseline model ARBERTv2. Overall, our models pre-trained on synthetic data demonstrate robust performance across various tasks. This highlights the potential of synthetic datasets in augmenting language model training in low-resource settings.
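Pretraining one of the three architectures from scratch on a synthetic corpus follows the usual transformers recipe; the tokenizer checkpoint, paths, and sizes below are illustrative assumptions, not the paper's configuration.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          GPT2Config, GPT2LMHeadModel, Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("aubmindlab/aragpt2-base")  # assumed Arabic BPE tokenizer
tok.pad_token = tok.eos_token                                   # GPT-2 tokenizers lack a pad token

data = load_dataset("text", data_files="synthetic_corpus.txt")["train"]
data = data.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

model = GPT2LMHeadModel(GPT2Config(vocab_size=len(tok)))  # random init: no gold pretraining
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="gpt2-synthetic"),
                  train_dataset=data,
                  data_collator=DataCollatorForLanguageModeling(tok, mlm=False))
trainer.train()
```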
|
[
"Alcoba Inciarte, Alcides",
"Kwon, Sang Yun",
"Nagoudi, El Moatez Billah",
"Abdul-Mageed, Muhammad"
] |
On the Utility of Pretraining Language Models on Synthetic Data
|
arabicnlp-1.23
|
Poster
|
2403.13638v2
|
https://aclanthology.org/2024.arabicnlp-1.24.bib
|
@inproceedings{khondaker-etal-2024-benchmarking,
title = "Benchmarking {LL}a{MA}-3 on {A}rabic Language Generation Tasks",
author = "Khondaker, Md Tawkat Islam and
Naeem, Numaan and
Khan, Fatimah and
Elmadany, AbdelRahim and
Abdul-Mageed, Muhammad",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.24",
pages = "283--297",
abstract = "Open-sourced large language models (LLMs) have exhibited remarkable performance in a variety of NLP tasks, often catching up with the closed-sourced LLMs like ChatGPT. Among these open LLMs, LLaMA-3-70B has emerged as the most recent and the most prominent one. However, how LLaMA-3-70B would situate itself in multilingual settings, especially in a rich morphological language like Arabic, has yet to be explored. In this work, we focus to bridge this gap by evaluating LLaMA-3-70B on a diverse set of Arabic natural language generation (NLG) benchmarks. To the best of our knowledge, this is the first study that comprehensively evaluates LLaMA-3-70B on tasks related to Arabic natural language generation. Our study reveals that LLaMA-3-70B lags behind the closed LLMs like ChatGPT, both in modern standard Arabic (MSA) and dialectal Arabic (DA). We further compare the performance of LLaMA-3-70B with our smaller and dedicated finetuned Arabic models. We find that both LLaMA-3-70B and ChatGPT are outperformed by comparatively smaller dedicated Arabic models, indicating the scope for potential improvement with Arabic-focused LLMs.",
}
|
Open-sourced large language models (LLMs) have exhibited remarkable performance in a variety of NLP tasks, often catching up with the closed-sourced LLMs like ChatGPT. Among these open LLMs, LLaMA-3-70B has emerged as the most recent and the most prominent one. However, how LLaMA-3-70B would situate itself in multilingual settings, especially in a morphologically rich language like Arabic, has yet to be explored. In this work, we focus on bridging this gap by evaluating LLaMA-3-70B on a diverse set of Arabic natural language generation (NLG) benchmarks. To the best of our knowledge, this is the first study that comprehensively evaluates LLaMA-3-70B on tasks related to Arabic natural language generation. Our study reveals that LLaMA-3-70B lags behind the closed LLMs like ChatGPT, both in modern standard Arabic (MSA) and dialectal Arabic (DA). We further compare the performance of LLaMA-3-70B with our smaller and dedicated finetuned Arabic models. We find that both LLaMA-3-70B and ChatGPT are outperformed by comparatively smaller dedicated Arabic models, indicating the scope for potential improvement with Arabic-focused LLMs.
|
[
"Khondaker, Md Tawkat Islam",
"Naeem, Numaan",
"Khan, Fatimah",
"Elmadany, AbdelRahim",
"Abdul-Mageed, Muhammad"
] |
Benchmarking {LL}a{MA}-3 on {A}rabic Language Generation Tasks
|
arabicnlp-1.24
|
Poster
|
2205.10687v1
|
https://aclanthology.org/2024.arabicnlp-1.25.bib
|
@inproceedings{saeed-etal-2024-nile,
title = "From Nile Sands to Digital Hands: Machine Translation of {C}optic Texts",
author = "Saeed, Muhammed and
Mohamed, Asim and
Mohamed, Mukhtar and
Shehata, Shady and
Abdul-Mageed, Muhammad",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.25",
pages = "298--308",
abstract = "The Coptic language, rooted in the historical landscapes of Egypt, continues to serve as a vital liturgical medium for the Coptic Orthodox and Catholic Churches across Egypt, North Sudan, Libya, and the United States, with approximately ten million speakers worldwide. However, the scarcity of digital resources in Coptic has resulted in its exclusion from digital systems, thereby limiting its accessibility and preservation in modern technological contexts. Our research addresses this issue by developing the most extensive parallel Coptic-centered corpus to date. This corpus comprises over 8,000 parallel sentences between Arabic and Coptic, and more than 24,000 parallel sentences between English and Coptic. We have also developed the first neural machine translation system between Coptic, English, and Arabic. Lastly, we evaluate the capability of leading proprietary Large Language Models (LLMs) to translate to and from Coptic using a few-shot learning approach (in-context learning). Our code and data are available at \url{https://github.com/UBC-NLP/copticmt}.",
}
|
The Coptic language, rooted in the historical landscapes of Egypt, continues to serve as a vital liturgical medium for the Coptic Orthodox and Catholic Churches across Egypt, North Sudan, Libya, and the United States, with approximately ten million speakers worldwide. However, the scarcity of digital resources in Coptic has resulted in its exclusion from digital systems, thereby limiting its accessibility and preservation in modern technological contexts. Our research addresses this issue by developing the most extensive parallel Coptic-centered corpus to date. This corpus comprises over 8,000 parallel sentences between Arabic and Coptic, and more than 24,000 parallel sentences between English and Coptic. We have also developed the first neural machine translation system between Coptic, English, and Arabic. Lastly, we evaluate the capability of leading proprietary Large Language Models (LLMs) to translate to and from Coptic using a few-shot learning approach (in-context learning). Our code and data are available at \url{https://github.com/UBC-NLP/copticmt}.
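The few-shot (in-context learning) evaluation boils down to a prompt of parallel examples followed by the sentence to translate. The wording below is illustrative, and the example pairs are placeholders rather than corpus items.

```python
def fewshot_prompt(examples, source_sentence, src="Coptic", tgt="English"):
    """examples: list of (source, target) parallel sentence pairs."""
    shots = "\n".join(f"{src}: {s}\n{tgt}: {t}" for s, t in examples)
    return (f"Translate from {src} to {tgt}.\n"
            f"{shots}\n{src}: {source_sentence}\n{tgt}:")
```

The resulting prompt is passed to any proprietary LLM's completion endpoint, and the continuation is taken as the candidate translation.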
|
[
"Saeed, Muhammed",
"Mohamed, Asim",
"Mohamed, Mukhtar",
"Shehata, Shady",
"Abdul-Mageed, Muhammad"
] |
From Nile Sands to Digital Hands: Machine Translation of {C}optic Texts
|
arabicnlp-1.25
|
Poster
|
1912.05082v3
|
https://aclanthology.org/2024.arabicnlp-1.26.bib
|
@inproceedings{aljabari-etal-2024-event,
title = "Event-Arguments Extraction Corpus and Modeling using {BERT} for {A}rabic",
author = "Aljabari, Alaa and
Duaibes, Lina and
Jarrar, Mustafa and
Khalilia, Mohammed",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.26",
pages = "309--319",
abstract = "Event-argument extraction is a challenging task, particularly in Arabic due to sparse linguistic resources. To fill this gap, we introduce the corpus (550k tokens) as an extension of Wojood, enriched with event-argument annotations. We used three types of event arguments: $agent$, $location$, and $date$, which we annotated as relation types. Our inter-annotator agreement evaluation resulted in 82.23{\%} $Kappa$ score and 87.2{\%} $F_1$-score. Additionally, we propose a novel method for event relation extraction using BERT, in which we treat the task as text entailment. This method achieves an $F_1$-score of 94.01{\%}.To further evaluate the generalization of our proposed method, we collected and annotated another out-of-domain corpus (about 80k tokens) called and used it as a second test set, on which our approach achieved promising results (83.59{\%} $F_1$-score). Last but not least, we propose an end-to-end system for event-arguments extraction. This system is implemented as part of SinaTools, and both corpora are publicly available at \url{https://sina.birzeit.edu/wojood}",
}
|
Event-argument extraction is a challenging task, particularly in Arabic due to sparse linguistic resources. To fill this gap, we introduce the corpus (550k tokens) as an extension of Wojood, enriched with event-argument annotations. We used three types of event arguments: $agent$, $location$, and $date$, which we annotated as relation types. Our inter-annotator agreement evaluation resulted in 82.23{\%} $Kappa$ score and 87.2{\%} $F_1$-score. Additionally, we propose a novel method for event relation extraction using BERT, in which we treat the task as text entailment. This method achieves an $F_1$-score of 94.01{\%}. To further evaluate the generalization of our proposed method, we collected and annotated another out-of-domain corpus (about 80k tokens) called and used it as a second test set, on which our approach achieved promising results (83.59{\%} $F_1$-score). Last but not least, we propose an end-to-end system for event-arguments extraction. This system is implemented as part of SinaTools, and both corpora are publicly available at \url{https://sina.birzeit.edu/wojood}
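The entailment formulation pairs the sentence with a hypothesis verbalizing a candidate (event, argument, role) triple and classifies it as entailed or not. A hedged sketch, with an illustrative checkpoint and an English hypothesis template standing in for the paper's Arabic one:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")  # assumed encoder
model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv2", num_labels=2)  # entailed vs. not entailed

def score_candidate(sentence, event, argument, role):
    hypothesis = f"{argument} is the {role} of the event {event}"  # verbalized triple
    enc = tok(sentence, hypothesis, return_tensors="pt", truncation=True)
    return model(**enc).logits.softmax(-1)[0, 1].item()  # P(entailed)
```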
|
[
"Aljabari, Alaa",
"Duaibes, Lina",
"Jarrar, Mustafa",
"Khalilia, Mohammed"
] |
Event-Arguments Extraction Corpus and Modeling using {BERT} for {A}rabic
|
arabicnlp-1.26
|
Poster
|
2004.14135v1
|
https://aclanthology.org/2024.arabicnlp-1.27.bib
|
@inproceedings{alwajih-etal-2024-dallah,
title = "Dallah: A Dialect-Aware Multimodal Large Language Model for {A}rabic",
author = "Alwajih, Fakhraddin and
Bhatia, Gagan and
Abdul-Mageed, Muhammad",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.27",
pages = "320--336",
abstract = "Recent advancements have significantly enhanced the capabilities of Multimodal Large Language Models (MLLMs) in generating and understanding image-to-text content. Despite these successes, progress is predominantly limited to English due to the scarcity of high-quality multimodal resources in other languages. This limitation impedes the development of competitive models in languages such as Arabic. To alleviate this situation, we introduce an efficient Arabic multimodal assistant, dubbed ***Dallah***, that utilizes an advanced language model based on LLaMA-2 to facilitate multimodal interactions. ***Dallah*** demonstrates state-of-the-art performance in Arabic MLLMs. Through fine-tuning six Arabic dialects, ***Dallah*** showcases its capability to handle complex dialectal interactions incorporating both textual and visual elements. The model excels in two benchmark tests: one evaluating its performance on Modern Standard Arabic (MSA) and another specifically designed to assess dialectal responses. Beyond its robust performance in multimodal interaction tasks, ***Dallah*** has the potential to pave the way for further development of dialect-aware Arabic MLLMs.",
}
|
Recent advancements have significantly enhanced the capabilities of Multimodal Large Language Models (MLLMs) in generating and understanding image-to-text content. Despite these successes, progress is predominantly limited to English due to the scarcity of high-quality multimodal resources in other languages. This limitation impedes the development of competitive models in languages such as Arabic. To alleviate this situation, we introduce an efficient Arabic multimodal assistant, dubbed ***Dallah***, that utilizes an advanced language model based on LLaMA-2 to facilitate multimodal interactions. ***Dallah*** demonstrates state-of-the-art performance in Arabic MLLMs. Through fine-tuning on six Arabic dialects, ***Dallah*** showcases its capability to handle complex dialectal interactions incorporating both textual and visual elements. The model excels in two benchmark tests: one evaluating its performance on Modern Standard Arabic (MSA) and another specifically designed to assess dialectal responses. Beyond its robust performance in multimodal interaction tasks, ***Dallah*** has the potential to pave the way for further development of dialect-aware Arabic MLLMs.
|
[
"Alwajih, Fakhraddin",
"Bhatia, Gagan",
"Abdul-Mageed, Muhammad"
] |
Dallah: A Dialect-Aware Multimodal Large Language Model for {A}rabic
|
arabicnlp-1.27
|
Poster
|
2407.18129v2
|
https://aclanthology.org/2024.arabicnlp-1.28.bib
|
@inproceedings{bashendy-etal-2024-qaes,
title = "{QAES}: First Publicly-Available Trait-Specific Annotations for Automated Scoring of {A}rabic Essays",
author = "Bashendy, May and
Albatarni, Salam and
Eltanbouly, Sohaila and
Zahran, Eman and
Elhuseyin, Hamdo and
Elsayed, Tamer and
Massoud, Walid and
Bouamor, Houda",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.28",
pages = "337--351",
abstract = "Automated Essay Scoring (AES) has emerged as a significant research problem within natural language processing, providing valuable support for educators in assessing student writing skills. In this paper, we introduce QAES, the first publicly available trait-specific annotations for Arabic AES, built on the Qatari Corpus of Argumentative Writing (QCAW). QAES includes a diverse collection of essays in Arabic, each of them annotated with holistic and trait-specific scores, including relevance, organization, vocabulary, style, development, mechanics, and grammar. In total, it comprises 195 Arabic essays (with lengths ranging from 239 to 806 words) across two distinct argumentative writing tasks. We benchmark our dataset against the state-of-the-art English baselines and a feature-based approach. In addition, we discuss the adopted guidelines and the challenges encountered during the annotation process. Finally, we provide insights into potential areas for improvement and future directions in Arabic AES research.",
}
|
Automated Essay Scoring (AES) has emerged as a significant research problem within natural language processing, providing valuable support for educators in assessing student writing skills. In this paper, we introduce QAES, the first publicly available trait-specific annotations for Arabic AES, built on the Qatari Corpus of Argumentative Writing (QCAW). QAES includes a diverse collection of essays in Arabic, each of them annotated with holistic and trait-specific scores, including relevance, organization, vocabulary, style, development, mechanics, and grammar. In total, it comprises 195 Arabic essays (with lengths ranging from 239 to 806 words) across two distinct argumentative writing tasks. We benchmark our dataset against the state-of-the-art English baselines and a feature-based approach. In addition, we discuss the adopted guidelines and the challenges encountered during the annotation process. Finally, we provide insights into potential areas for improvement and future directions in Arabic AES research.
|
[
"Bashendy, May",
"Albatarni, Salam",
"Eltanbouly, Sohaila",
"Zahran, Eman",
"Elhuseyin, Hamdo",
"Elsayed, Tamer",
"Massoud, Walid",
"Bouamor, Houda"
] |
{QAES}: First Publicly-Available Trait-Specific Annotations for Automated Scoring of {A}rabic Essays
|
arabicnlp-1.28
|
Poster
|
2407.11212v1
|
https://aclanthology.org/2024.arabicnlp-1.29.bib
|
@inproceedings{ferhat-etal-2024-functional,
title = "Functional Text Dimensions for {A}rabic Text Classification",
author = "Ferhat, Zeyd and
Betka, Abir and
Barka, Riyadh and
Kahhoul, Zineddine and
Boutiba, Selma and
Tiar, Mohamed and
Dahmani, Habiba and
Abdelali, Ahmed",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.29",
pages = "352--360",
abstract = "Text classification is of paramount importance in a wide range of applications, including information retrieval, extraction and sentiment analysis. The challenge of classifying and labelling text genres, especially in web-based corpora, has received considerable attention. The frequent absence of unambiguous genre information complicates the identification of text types. To address these issues, the Functional Text Dimensions (FTD) method has been introduced to provide a universal set of categories for text classification. This study presents the Arabic Functional Text Dimensions Corpus (AFTD Corpus), a carefully curated collection of documents for evaluating text classification in Arabic. The AFTD Corpus which we are making available to the community, consists of 3400 documents spanning 17 different class categories. Through a comprehensive evaluation using traditional machine learning and neural models, we assess the effectiveness of the FTD approach in the Arabic context. CAMeLBERT, a state-of-the-art model, achieved an impressive F1 score of 0.81 on our corpus. This research highlights the potential of the FTD method for improving text classification, especially for Arabic content, and underlines the importance of robust classification models in web applications.",
}
|
Text classification is of paramount importance in a wide range of applications, including information retrieval, extraction and sentiment analysis. The challenge of classifying and labelling text genres, especially in web-based corpora, has received considerable attention. The frequent absence of unambiguous genre information complicates the identification of text types. To address these issues, the Functional Text Dimensions (FTD) method has been introduced to provide a universal set of categories for text classification. This study presents the Arabic Functional Text Dimensions Corpus (AFTD Corpus), a carefully curated collection of documents for evaluating text classification in Arabic. The AFTD Corpus, which we make available to the community, consists of 3400 documents spanning 17 different class categories. Through a comprehensive evaluation using traditional machine learning and neural models, we assess the effectiveness of the FTD approach in the Arabic context. CAMeLBERT, a state-of-the-art model, achieved an impressive F1 score of 0.81 on our corpus. This research highlights the potential of the FTD method for improving text classification, especially for Arabic content, and underlines the importance of robust classification models in web applications.
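A minimal sketch of the kind of fine-tuning setup this benchmark implies: an Arabic BERT classifier over the 17 FTD categories. The CAMeLBERT checkpoint id and the toy data are illustrative assumptions; the real AFTD splits are not reproduced here.

```python
# Sketch: fine-tune an Arabic BERT over 17 FTD classes (toy data; assumed checkpoint).
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

CKPT = "CAMeL-Lab/bert-base-arabic-camelbert-mix"  # one public CAMeLBERT variant

class TinyFTD(Dataset):
    """Tiny in-memory stand-in for the AFTD training split."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tok = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT, num_labels=17)
train = TinyFTD(["نص أول", "نص ثان"], [0, 1], tok)  # placeholder documents and labels
Trainer(model=model,
        args=TrainingArguments(output_dir="ftd", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=train).train()
```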
|
[
"Ferhat, Zeyd",
"Betka, Abir",
"Barka, Riyadh",
"Kahhoul, Zineddine",
"Boutiba, Selma",
"Tiar, Mohamed",
"Dahmani, Habiba",
"Abdelali, Ahmed"
] |
Functional Text Dimensions for {A}rabic Text Classification
|
arabicnlp-1.29
|
Poster
|
2006.11586v1
|
https://aclanthology.org/2024.arabicnlp-1.30.bib
|
@inproceedings{khalilia-etal-2024-arabicnlu,
title = "{A}rabic{NLU} 2024: The First {A}rabic Natural Language Understanding Shared Task",
author = "Khalilia, Mohammed and
Malaysha, Sanad and
Suwaileh, Reem and
Jarrar, Mustafa and
Aljabari, Alaa and
Elsayed, Tamer and
Zitouni, Imed",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.30",
pages = "361--371",
abstract = "This paper presents an overview of the Arabic Natural Language Understanding (ArabicNLU 2024) shared task, focusing on two subtasks: Word Sense Disambiguation (WSD) and Location Mention Disambiguation (LMD). The task aimed to evaluate the ability of automated systems to resolve word ambiguity and identify locations mentioned in Arabic text. We provided participants with novel datasets, including a sense-annotated corpus for WSD, called SALMA with approximately 34k annotated tokens, and the dataset with 3,893 annotations and 763 unique location mentions. These are challenging tasks. Out of the 38 registered teams, only three teams participated in the final evaluation phase, with the highest accuracy being 77.8{\%} for WSD and 95.0{\%} for LMD. The shared task not only facilitated the evaluation and comparison of different techniques, but also provided valuable insights and resources for the continued advancement of Arabic NLU technologies.",
}
|
This paper presents an overview of the Arabic Natural Language Understanding (ArabicNLU 2024) shared task, focusing on two subtasks: Word Sense Disambiguation (WSD) and Location Mention Disambiguation (LMD). The task aimed to evaluate the ability of automated systems to resolve word ambiguity and identify locations mentioned in Arabic text. We provided participants with novel datasets, including a sense-annotated corpus for WSD, called SALMA with approximately 34k annotated tokens, and the dataset with 3,893 annotations and 763 unique location mentions. These are challenging tasks. Out of the 38 registered teams, only three teams participated in the final evaluation phase, with the highest accuracy being 77.8{\%} for WSD and 95.0{\%} for LMD. The shared task not only facilitated the evaluation and comparison of different techniques, but also provided valuable insights and resources for the continued advancement of Arabic NLU technologies.
|
[
"Khalilia, Mohammed",
"Malaysha, Sanad",
"Suwaileh, Reem",
"Jarrar, Mustafa",
"Aljabari, Alaa",
"Elsayed, Tamer",
"Zitouni, Imed"
] |
{A}rabic{NLU} 2024: The First {A}rabic Natural Language Understanding Shared Task
|
arabicnlp-1.30
|
Poster
|
2407.20663v1
|
https://aclanthology.org/2024.arabicnlp-1.31.bib
|
@inproceedings{wael-etal-2024-pirates,
title = "Pirates at {A}rabic{NLU}2024: Enhancing {A}rabic Word Sense Disambiguation using Transformer-Based Approaches",
author = "Wael, Tasneem and
Elrefai, Eman and
Makram, Mohamed and
Selim, Sahar and
Khoriba, Ghada",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.31",
pages = "372--376",
abstract = "This paper presents a novel approach to Ara-bic Word Sense Disambiguation (WSD) lever-aging transformer-based models to tackle thecomplexities of the Arabic language. Utiliz-ing the SALMA dataset, we applied severaltechniques, including Sentence Transformerswith Siamese networks and the SetFit frame-work optimized for few-shot learning. Our ex-periments, structured around a robust evalua-tion framework, achieved a promising F1-scoreof up to 71{\%}, securing second place in theArabicNLU 2024: The First Arabic NaturalLanguage Understanding Shared Task compe-tition. These results demonstrate the efficacyof our approach, especially in dealing with thechallenges posed by homophones, homographs,and the lack of diacritics in Arabic texts. Theproposed methods significantly outperformedtraditional WSD techniques, highlighting theirpotential to enhance the accuracy of Arabicnatural language processing applications.",
}
|
This paper presents a novel approach to Arabic Word Sense Disambiguation (WSD) leveraging transformer-based models to tackle the complexities of the Arabic language. Utilizing the SALMA dataset, we applied several techniques, including Sentence Transformers with Siamese networks and the SetFit framework optimized for few-shot learning. Our experiments, structured around a robust evaluation framework, achieved a promising F1-score of up to 71{\%}, securing second place in the ArabicNLU 2024: The First Arabic Natural Language Understanding Shared Task competition. These results demonstrate the efficacy of our approach, especially in dealing with the challenges posed by homophones, homographs, and the lack of diacritics in Arabic texts. The proposed methods significantly outperformed traditional WSD techniques, highlighting their potential to enhance the accuracy of Arabic natural language processing applications.
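The embedding-similarity idea behind the Siamese / Sentence-Transformers approach can be sketched as below: embed the ambiguous word's context and each candidate sense gloss, then pick the closest gloss. The checkpoint name and the toy glosses are illustrative assumptions, not the team's exact setup.

```python
# Hedged sketch: WSD by nearest sense gloss in a sentence-embedding space.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

def disambiguate(context: str, glosses: list[str]) -> int:
    """Return the index of the sense gloss most similar to the context."""
    ctx_emb = model.encode(context, convert_to_tensor=True)
    gloss_embs = model.encode(glosses, convert_to_tensor=True)
    scores = util.cos_sim(ctx_emb, gloss_embs)[0]  # cosine similarity to each gloss
    return int(scores.argmax())
```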
|
[
"Wael, Tasneem",
"Elrefai, Eman",
"Makram, Mohamed",
"Selim, Sahar",
"Khoriba, Ghada"
] |
Pirates at {A}rabic{NLU}2024: Enhancing {A}rabic Word Sense Disambiguation using Transformer-Based Approaches
|
arabicnlp-1.31
|
Poster
|
2104.08110v1
|
https://aclanthology.org/2024.arabicnlp-1.32.bib
|
@inproceedings{rajpoot-etal-2024-upaya,
title = "Upaya at {A}rabic{NLU} Shared-Task: {A}rabic Lexical Disambiguation using Large Language Models",
author = "Rajpoot, Pawan and
Jindal, Ashvini and
Parikh, Ankur",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.32",
pages = "377--382",
abstract = "Disambiguating a word{'}s intended meaning(sense) in a given context is important in Nat-ural Language Understanding (NLU). WSDaims to determine the correct sense of ambigu-ous words in context. At the same time, LMD(a WSD variation) focuses on disambiguatinglocation mention. Both tasks are vital in Nat-ural Language Processing (NLP) and informa-tion retrieval, as they help correctly interpretand extract information from text. Arabic ver-sion is further challenging because of its mor-phological richness, encompassing a complexinterplay of roots, stems, and affixes. This pa-per describes our solutions to both tasks, em-ploying Llama3 and Cohere-based models un-der Zero-Shot Learning and Re-Ranking, re-spectively. Both the shared tasks were partof the second Arabic Natural Language Pro-cessing Conference co-located with ACL 2024.Overall, we achieved 1st rank in the WSD task(accuracy 78{\%}) and 2nd rank in the LMD task(MRR@1 0.59)",
}
|
Disambiguating a word{'}s intended meaning (sense) in a given context is important in Natural Language Understanding (NLU). WSD aims to determine the correct sense of ambiguous words in context. At the same time, LMD (a WSD variation) focuses on disambiguating location mentions. Both tasks are vital in Natural Language Processing (NLP) and information retrieval, as they help correctly interpret and extract information from text. The Arabic version is further challenging because of its morphological richness, encompassing a complex interplay of roots, stems, and affixes. This paper describes our solutions to both tasks, employing Llama3 and Cohere-based models under Zero-Shot Learning and Re-Ranking, respectively. Both shared tasks were part of the second Arabic Natural Language Processing Conference co-located with ACL 2024. Overall, we achieved 1st rank in the WSD task (accuracy 78{\%}) and 2nd rank in the LMD task (MRR@1 0.59).
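A sketch of the zero-shot prompting pattern this abstract describes. The prompt wording, the (gated) checkpoint id, and the decoding settings are all assumptions, not the team's actual configuration.

```python
# Hedged sketch: zero-shot WSD by asking an instruction-tuned LLM to pick a sense.
from transformers import pipeline

# Illustrative checkpoint; access to the Llama 3 weights is gated on Hugging Face.
generator = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")

def wsd_prompt(sentence: str, word: str, senses: list[str]) -> str:
    """Build a multiple-choice prompt over candidate senses (assumed template)."""
    options = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(senses))
    return (f"Sentence: {sentence}\n"
            f"Which sense of the word '{word}' is used here?\n"
            f"{options}\nAnswer with the option number only.")

out = generator(wsd_prompt("...", "...", ["sense A", "sense B"]), max_new_tokens=5)
print(out[0]["generated_text"])
```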
|
[
"Rajpoot, Pawan",
"Jindal, Ashvini",
"Parikh, Ankur"
] |
Upaya at {A}rabic{NLU} Shared-Task: {A}rabic Lexical Disambiguation using Large Language Models
|
arabicnlp-1.32
|
Poster
|
9410029v1
|
https://aclanthology.org/2024.arabicnlp-1.33.bib
|
@inproceedings{abdel-salam-2024-rematchka,
title = "rematchka at {A}rabic{NLU}2024: Evaluating Large Language Models for {A}rabic Word Sense and Location Sense Disambiguation",
author = "Abdel-Salam, Reem",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.33",
pages = "383--392",
abstract = "Natural Language Understanding (NLU) plays a vital role in Natural Language Processing (NLP) by facilitating semantic interactions. Arabic, with its diverse morphology, poses a challenge as it allows multiple interpretations of words, leading to potential misunderstandings and errors in NLP applications. In this paper, we present our approach for tackling Arabic NLU shared tasks for word sense disambiguation (WSD) and location mention disambiguation (LMD). Various approaches have been investigated from zero-shot inference of large language models (LLMs) to fine-tuning of pre-trained language models (PLMs). The best approach achieved 57{\%} on WSD task ranking third place, while for the LMD task, our best systems achieved 94{\%} MRR@1 ranking first place.",
}
|
Natural Language Understanding (NLU) plays a vital role in Natural Language Processing (NLP) by facilitating semantic interactions. Arabic, with its diverse morphology, poses a challenge as it allows multiple interpretations of words, leading to potential misunderstandings and errors in NLP applications. In this paper, we present our approach for tackling the Arabic NLU shared tasks for word sense disambiguation (WSD) and location mention disambiguation (LMD). Various approaches were investigated, from zero-shot inference with large language models (LLMs) to fine-tuning of pre-trained language models (PLMs). The best approach achieved 57{\%} on the WSD task, ranking third place, while for the LMD task, our best system achieved 94{\%} MRR@1, ranking first place.
|
[
"Abdel-Salam, Reem"
] |
rematchka at {A}rabic{NLU}2024: Evaluating Large Language Models for {A}rabic Word Sense and Location Sense Disambiguation
|
arabicnlp-1.33
|
Poster
|
2104.08110v1
|
https://aclanthology.org/2024.arabicnlp-1.34.bib
|
@inproceedings{malaysha-etal-2024-arafinnlp,
title = "{A}ra{F}in{NLP} 2024: The First {A}rabic Financial {NLP} Shared Task",
author = "Malaysha, Sanad and
El-Haj, Mo and
Ezzini, Saad and
Khalilia, Mohammed and
Jarrar, Mustafa and
Almujaiwel, Sultan and
Berrada, Ismail and
Bouamor, Houda",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.34",
pages = "393--402",
abstract = "The expanding financial markets of the Arab world require sophisticated Arabic NLP tools. To address this need within the banking domain, the Arabic Financial NLP (AraFinNLP) shared task proposes two subtasks: (i) Multi-dialect Intent Detection and (ii) Cross-dialect Translation and Intent Preservation. This shared task uses the updated ArBanking77 dataset, which includes about 39k parallel queries in MSA and four dialects. Each query is labeled with one or more of a common 77 intents in the banking domain. These resources aim to foster the development of robust financial Arabic NLP, particularly in the areas of machine translation and banking chat-bots.A total of 45 unique teams registered for this shared task, with 11 of them actively participated in the test phase. Specifically, 11 teams participated in Subtask 1, while only 1 team participated in Subtask 2. The winning team of Subtask 1 achieved F1 score of 0.8773, and the only team submitted in Subtask 2 achieved a 1.667 BLEU score.",
}
|
The expanding financial markets of the Arab world require sophisticated Arabic NLP tools. To address this need within the banking domain, the Arabic Financial NLP (AraFinNLP) shared task proposes two subtasks: (i) Multi-dialect Intent Detection and (ii) Cross-dialect Translation and Intent Preservation. This shared task uses the updated ArBanking77 dataset, which includes about 39k parallel queries in MSA and four dialects. Each query is labeled with one or more of 77 common intents in the banking domain. These resources aim to foster the development of robust financial Arabic NLP, particularly in the areas of machine translation and banking chat-bots. A total of 45 unique teams registered for this shared task, with 11 of them actively participating in the test phase. Specifically, 11 teams participated in Subtask 1, while only 1 team participated in Subtask 2. The winning team of Subtask 1 achieved an F1 score of 0.8773, and the only team that submitted to Subtask 2 achieved a BLEU score of 1.667.
|
[
"Malaysha, Sanad",
"El-Haj, Mo",
"Ezzini, Saad",
"Khalilia, Mohammed",
"Jarrar, Mustafa",
"Almujaiwel, Sultan",
"Berrada, Ismail",
"Bouamor, Houda"
] |
{A}ra{F}in{NLP} 2024: The First {A}rabic Financial {NLP} Shared Task
|
arabicnlp-1.34
|
Poster
|
2407.09818v1
|
https://aclanthology.org/2024.arabicnlp-1.35.bib
|
@inproceedings{hariri-abu-farha-2024-smash,
title = "{SMASH} at {A}ra{F}in{NLP}2024: Benchmarking {A}rabic {BERT} Models on the Intent Detection",
author = "Hariri, Youssef and
Abu Farha, Ibrahim",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.35",
pages = "403--409",
abstract = "The recent growth in Middle Eastern stock markets has intensified the demand for specialized financial Arabic NLP models to serve this sector. This article presents the participation of Team SMASH of The University of Edinburgh in the Multi-dialect Intent Detection task (Subtask 1) of the Arabic Financial NLP (AraFinNLP) Shared Task 2024. The dataset used in the shared task is the ArBanking77 (Jarrar et al., 2023). We tackled this task as a classification problem and utilized several BERT and BART-based models to classify the queries efficiently. Our solution is based on implementing a two-step hierarchical classification model based on MARBERTv2. We fine-tuned the model by using the original queries. Our team, SMASH, was ranked 9th with a macro F1 score of 0.7866, indicating areas for further refinement and potential enhancement of the model{'}s performance.",
}
|
The recent growth in Middle Eastern stock markets has intensified the demand for specialized financial Arabic NLP models to serve this sector. This article presents the participation of Team SMASH of The University of Edinburgh in the Multi-dialect Intent Detection task (Subtask 1) of the Arabic Financial NLP (AraFinNLP) Shared Task 2024. The dataset used in the shared task is ArBanking77 (Jarrar et al., 2023). We tackled this task as a classification problem and utilized several BERT- and BART-based models to classify the queries efficiently. Our solution implements a two-step hierarchical classification model based on MARBERTv2, fine-tuned on the original queries. Our team, SMASH, was ranked 9th with a macro F1 score of 0.7866, indicating areas for further refinement and potential enhancement of the model{'}s performance.
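A two-step hierarchical classifier of the kind this abstract mentions can be sketched roughly as follows: a coarse classifier routes a query to a group, then a per-group classifier picks the fine-grained intent. The coarse groups and per-group intent counts below are made-up placeholders; only the MARBERTv2 backbone is named in the abstract.

```python
# Hedged sketch: two-step hierarchical intent classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

BASE = "UBC-NLP/MARBERTv2"                             # backbone named in the abstract
GROUPS = ["cards", "loans", "transfers"]               # coarse groups are assumptions
FINE_SIZES = {"cards": 5, "loans": 4, "transfers": 6}  # made-up per-group intent counts

tok = AutoTokenizer.from_pretrained(BASE)
coarse = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=len(GROUPS))
fine = {g: AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=n)
        for g, n in FINE_SIZES.items()}

def predict(query: str):
    x = tok(query, return_tensors="pt", truncation=True)
    with torch.no_grad():
        g = GROUPS[coarse(**x).logits.argmax(-1).item()]   # step 1: coarse group
        intent = fine[g](**x).logits.argmax(-1).item()     # step 2: intent within group
    return g, intent
```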
|
[
"Hariri, Youssef",
"Abu Farha, Ibrahim"
] |
{SMASH} at {A}ra{F}in{NLP}2024: Benchmarking {A}rabic {BERT} Models on the Intent Detection
|
arabicnlp-1.35
|
Poster
|
2405.16482v1
|
https://aclanthology.org/2024.arabicnlp-1.36.bib
|
@inproceedings{chowdhury-etal-2024-fired,
title = "{F}ired{\_}from{\_}{NLP} at {A}ra{F}in{NLP} 2024: Dual-Phase-{BERT} - A Fine-Tuned Transformer-Based Model for Multi-Dialect Intent Detection in The Financial Domain for The {A}rabic Language",
author = "Chowdhury, Md. and
Chowdhury, Mostak and
Shanto, Anik and
Murad, Hasan and
Das, Udoy",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.36",
pages = "410--414",
abstract = "In the financial industry, identifying user intent from text inputs is crucial for various tasks such as automated trading, sentiment analysis, and customer support. One important component of natural language processing (NLP) is intent detection, which is significant to the finance sector. Limited studies have been conducted in the field of finance using languages with limited resources like Arabic, despite notable works being done in high-resource languages like English. To advance Arabic NLP in the financial domain, the organizer of AraFinNLP 2024 has arranged a shared task for detecting banking intents from the queries in various Arabic dialects, introducing a novel dataset named ArBanking77 which includes a collection of banking queries categorized into 77 distinct intents classes. To accomplish this task, we have presented a hierarchical approach called Dual-Phase-BERT in which the detection of dialects is carried out first, followed by the detection of banking intents. Using the provided ArBanking77 dataset, we have trained and evaluated several conventional machine learning, and deep learning models along with some cutting-edge transformer-based models. Among these models, our proposed Dual-Phase-BERT model has ranked $7^{th}$ out of all competitors, scoring 0.801 on the scale of F1-score on the test set.",
}
|
In the financial industry, identifying user intent from text inputs is crucial for various tasks such as automated trading, sentiment analysis, and customer support. One important component of natural language processing (NLP) is intent detection, which is significant to the finance sector. Few studies have been conducted in the financial domain for low-resource languages like Arabic, despite notable work in high-resource languages like English. To advance Arabic NLP in the financial domain, the organizers of AraFinNLP 2024 arranged a shared task for detecting banking intents from queries in various Arabic dialects, introducing a novel dataset named ArBanking77, which includes a collection of banking queries categorized into 77 distinct intent classes. To accomplish this task, we present a hierarchical approach called Dual-Phase-BERT, in which dialect detection is carried out first, followed by banking intent detection. Using the provided ArBanking77 dataset, we trained and evaluated several conventional machine learning and deep learning models, along with some cutting-edge transformer-based models. Among these models, our proposed Dual-Phase-BERT ranked $7^{th}$ among all competitors, achieving an F1-score of 0.801 on the test set.
|
[
"Chowdhury, Md.",
"Chowdhury, Mostak",
"Shanto, Anik",
"Murad, Hasan",
"Das, Udoy"
] |
{F}ired{\_}from{\_}{NLP} at {A}ra{F}in{NLP} 2024: Dual-Phase-{BERT} - A Fine-Tuned Transformer-Based Model for Multi-Dialect Intent Detection in The Financial Domain for The {A}rabic Language
|
arabicnlp-1.36
|
Poster
|
2402.07448v1
|
https://aclanthology.org/2024.arabicnlp-1.37.bib
|
@inproceedings{elkordi-etal-2024-alexunlp24,
title = "{A}lexu{NLP}24 at {A}ra{F}in{NLP}2024: Multi-Dialect {A}rabic Intent Detection with Contrastive Learning in Banking Domain",
author = "Elkordi, Hossam and
Sakr, Ahmed and
Torki, Marwan and
El-Makky, Nagwa",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.37",
pages = "415--421",
abstract = "Arabic banking intent detection represents a challenging problem across multiple dialects. It imposes generalization difficulties due to the scarcity of Arabic language and its dialects resources compared to English. We propose a methodology that leverages contrastive training to overcome this limitation. We also augmented the data with several dialects using a translation model. Our experiments demonstrate the ability of our approach in capturing linguistic nuances across different Arabic dialects as well as accurately differentiating between banking intents across diverse linguistic landscapes. This would enhance multi-dialect banking services in the Arab world with limited Arabic language resources. Using our proposed method we achieved second place on subtask 1 leaderboard of the AraFinNLP2024 shared task with micro-F1 score of 0.8762 on the test split.",
}
|
Arabic banking intent detection represents a challenging problem across multiple dialects. It poses generalization difficulties due to the scarcity of resources for Arabic and its dialects compared to English. We propose a methodology that leverages contrastive training to overcome this limitation. We also augmented the data with several dialects using a translation model. Our experiments demonstrate the ability of our approach to capture linguistic nuances across different Arabic dialects as well as to accurately differentiate between banking intents across diverse linguistic landscapes. This could enhance multi-dialect banking services in the Arab world, where Arabic language resources are limited. Using our proposed method, we achieved second place on the subtask 1 leaderboard of the AraFinNLP2024 shared task with a micro-F1 score of 0.8762 on the test split.
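A generic supervised-contrastive objective of the kind such contrastive training typically uses: pull embeddings of same-intent queries (across dialects) together and push different intents apart. This is a SupCon-style sketch under our own assumptions, not the team's exact loss.

```python
# Illustrative supervised contrastive loss over a batch of query embeddings.
import torch
import torch.nn.functional as F

def supcon_loss(embeddings: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    """embeddings: (N, d); labels: (N,) intent ids; tau: temperature (assumed value)."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / tau                                    # pairwise similarities
    mask_pos = (labels[:, None] == labels[None, :]).float()
    mask_pos.fill_diagonal_(0)                             # positives exclude self
    # Mask self-similarity out of the softmax denominator.
    logits = sim - 1e9 * torch.eye(len(z), device=z.device)
    log_prob = logits - logits.logsumexp(dim=1, keepdim=True)
    pos_count = mask_pos.sum(1).clamp(min=1)               # avoid division by zero
    return -(mask_pos * log_prob).sum(1).div(pos_count).mean()
```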
|
[
"Elkordi, Hossam",
"Sakr, Ahmed",
"Torki, Marwan",
"El-Makky, Nagwa"
] |
{A}lexu{NLP}24 at {A}ra{F}in{NLP}2024: Multi-Dialect {A}rabic Intent Detection with Contrastive Learning in Banking Domain
|
arabicnlp-1.37
|
Poster
|
2405.16482v1
|
https://aclanthology.org/2024.arabicnlp-1.38.bib
|
@inproceedings{paran-etal-2024-semanticcuetsync,
title = "{S}emantic{C}uet{S}ync at {A}ra{F}in{NLP}2024: Classification of Cross-Dialect Intent in the Banking Domain using Transformers",
author = "Paran, Ashraful and
Shohan, Symom and
Hossain, Md. and
Hossain, Jawad and
Ahsan, Shawly and
Hoque, Mohammed Moshiul",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.38",
pages = "422--427",
abstract = "Intention detection is a crucial aspect of natural language understanding (NLU), focusing on identifying the primary objective underlying user input. In this work, we present a transformer-based method that excels in determining the intent of Arabic text within the banking domain. We explored several machine learning (ML), deep learning (DL), and transformer-based models on an Arabic banking dataset for intent detection. Our findings underscore the challenges that traditional ML and DL models face in understanding the nuances of various Arabic dialects, leading to subpar performance in intent detection. However, the transformer-based methods, designed to tackle such complexities, significantly outperformed the other models in classifying intent across different Arabic dialects. Notably, the AraBERTv2 model achieved the highest micro F1 score of 82.08{\%} in ArBanking77 dataset, a testament to its effectiveness in this context. This achievement, which contributed to our work being ranked 5$^{th}$ in the shared task, AraFinNLP2024, highlights the importance of developing models that can effectively handle the intricacies of Arabic language processing and intent detection.",
}
|
Intention detection is a crucial aspect of natural language understanding (NLU), focusing on identifying the primary objective underlying user input. In this work, we present a transformer-based method that excels in determining the intent of Arabic text within the banking domain. We explored several machine learning (ML), deep learning (DL), and transformer-based models on an Arabic banking dataset for intent detection. Our findings underscore the challenges that traditional ML and DL models face in understanding the nuances of various Arabic dialects, leading to subpar performance in intent detection. However, the transformer-based methods, designed to tackle such complexities, significantly outperformed the other models in classifying intent across different Arabic dialects. Notably, the AraBERTv2 model achieved the highest micro F1 score of 82.08{\%} on the ArBanking77 dataset, a testament to its effectiveness in this context. This achievement, which contributed to our work being ranked 5$^{th}$ in the shared task, AraFinNLP2024, highlights the importance of developing models that can effectively handle the intricacies of Arabic language processing and intent detection.
|
[
"Paran, Ashraful",
"Shohan, Symom",
"Hossain, Md.",
"Hossain, Jawad",
"Ahsan, Shawly",
"Hoque, Mohammed Moshiul"
] |
{S}emantic{C}uet{S}ync at {A}ra{F}in{NLP}2024: Classification of Cross-Dialect Intent in the Banking Domain using Transformers
|
arabicnlp-1.38
|
Poster
|
2212.13015v1
|
https://aclanthology.org/2024.arabicnlp-1.39.bib
|
@inproceedings{nasr-ben-hajhmida-2024-senit,
title = "{SENIT} at {A}ra{F}in{NLP}2024: trust your model or combine two",
author = "Nasr, Abdelmomen and
Ben HajHmida, Moez",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.39",
pages = "428--432",
abstract = "We describe our submitted system to the 2024 Shared Task on The Arabic Financial NLP (Malaysha et al., 2024). We tackled Subtask 1, namely Multi-dialect Intent Detection. We used state-of-the-art pretrained contextualized text representation models and fine-tuned them according to the downstream task at hand. We started by finetuning multilingual BERT and various Arabic variants, namely MARBERTV1, MARBERTV2, and CAMeLBERT. Then, we employed an ensembling technique to improve our classification performance combining MARBERTV2 and CAMeLBERT embeddings. The findings indicate that MARBERTV2 surpassed all the other models mentioned.",
}
|
We describe our submitted system to the 2024 Shared Task on The Arabic Financial NLP (Malaysha et al., 2024). We tackled Subtask 1, namely Multi-dialect Intent Detection. We used state-of-the-art pretrained contextualized text representation models and fine-tuned them according to the downstream task at hand. We started by finetuning multilingual BERT and various Arabic variants, namely MARBERTV1, MARBERTV2, and CAMeLBERT. Then, we employed an ensembling technique to improve our classification performance combining MARBERTV2 and CAMeLBERT embeddings. The findings indicate that MARBERTV2 surpassed all the other models mentioned.
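One way to realize the embedding-level ensemble this abstract describes is to concatenate [CLS] vectors from the two encoders and classify on top. This is a minimal sketch under our own assumptions (head size, pooling choice); the public checkpoint ids are the usual ones for these model families.

```python
# Hedged sketch: ensemble of MARBERTv2 and CAMeLBERT embeddings via concatenation.
import torch
import torch.nn as nn
from transformers import AutoModel

class DualEncoderClassifier(nn.Module):
    def __init__(self, n_intents: int = 77):  # ArBanking77 has 77 intents
        super().__init__()
        self.enc_a = AutoModel.from_pretrained("UBC-NLP/MARBERTv2")
        self.enc_b = AutoModel.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-mix")
        hidden = self.enc_a.config.hidden_size + self.enc_b.config.hidden_size
        self.head = nn.Linear(hidden, n_intents)  # head size is an assumption

    def forward(self, batch_a, batch_b):
        # Each encoder gets input from its own tokenizer; take the [CLS] vectors.
        cls_a = self.enc_a(**batch_a).last_hidden_state[:, 0]
        cls_b = self.enc_b(**batch_b).last_hidden_state[:, 0]
        return self.head(torch.cat([cls_a, cls_b], dim=-1))
```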
|
[
"Nasr, Abdelmomen",
"Ben HajHmida, Moez"
] |
{SENIT} at {A}ra{F}in{NLP}2024: trust your model or combine two
|
arabicnlp-1.39
|
Poster
|
1809.00052v1
|
https://aclanthology.org/2024.arabicnlp-1.40.bib
|
@inproceedings{fares-touileb-2024-babelbot,
title = "{B}abel{B}ot at {A}ra{F}in{NLP}2024: Fine-tuning T5 for Multi-dialect Intent Detection with Synthetic Data and Model Ensembling",
author = "Fares, Murhaf and
Touileb, Samia",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.40",
pages = "433--440",
abstract = "This paper presents our results for the Arabic Financial NLP (AraFinNLP) shared task at the Second Arabic Natural Language Processing Conference (ArabicNLP 2024). We participated in the first sub-task, Multi-dialect Intent Detection, which focused on cross-dialect intent detection in the banking domain. Our approach involved fine-tuning an encoder-only T5 model, generating synthetic data, and model ensembling. Additionally, we conducted an in-depth analysis of the dataset, addressing annotation errors and problematic translations. Our model was ranked third in the shared task, achieving a F1-score of 0.871.",
}
|
This paper presents our results for the Arabic Financial NLP (AraFinNLP) shared task at the Second Arabic Natural Language Processing Conference (ArabicNLP 2024). We participated in the first sub-task, Multi-dialect Intent Detection, which focused on cross-dialect intent detection in the banking domain. Our approach involved fine-tuning an encoder-only T5 model, generating synthetic data, and model ensembling. Additionally, we conducted an in-depth analysis of the dataset, addressing annotation errors and problematic translations. Our model was ranked third in the shared task, achieving an F1-score of 0.871.
|
[
"Fares, Murhaf",
"Touileb, Samia"
] |
{B}abel{B}ot at {A}ra{F}in{NLP}2024: Fine-tuning T5 for Multi-dialect Intent Detection with Synthetic Data and Model Ensembling
|
arabicnlp-1.40
|
Poster
|
2012.01721v2
|
https://aclanthology.org/2024.arabicnlp-1.41.bib
|
@inproceedings{ramadan-etal-2024-ma,
title = "{MA} at {A}ra{F}in{NLP}2024: {BERT}-based Ensemble for Cross-dialectal {A}rabic Intent Detection",
author = "Ramadan, Asmaa and
Amr, Manar and
Torki, Marwan and
El-Makky, Nagwa",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.41",
pages = "441--445",
abstract = "Intent detection, also called intent classification or recognition, is an NLP technique to comprehend the purpose behind user utterances. This paper focuses on Multi-dialect Arabic intent detection in banking, utilizing the ArBanking77 dataset. Our method employs an ensemble of fine-tuned BERT-based models, integrating contrastive loss for training. To enhance generalization to diverse Arabic dialects, we augment the ArBanking77 dataset, originally in Modern Standard Arabic (MSA) and Palestinian, with additional dialects such as Egyptian, Moroccan, and Saudi, among others. Our approach achieved an F1-score of 0.8771, ranking first in subtask-1 of the AraFinNLP shared task 2024.",
}
|
Intent detection, also called intent classification or recognition, is an NLP technique to comprehend the purpose behind user utterances. This paper focuses on Multi-dialect Arabic intent detection in banking, utilizing the ArBanking77 dataset. Our method employs an ensemble of fine-tuned BERT-based models, integrating contrastive loss for training. To enhance generalization to diverse Arabic dialects, we augment the ArBanking77 dataset, originally in Modern Standard Arabic (MSA) and Palestinian, with additional dialects such as Egyptian, Moroccan, and Saudi, among others. Our approach achieved an F1-score of 0.8771, ranking first in subtask-1 of the AraFinNLP shared task 2024.
|
[
"Ramadan, Asmaa",
"Amr, Manar",
"Torki, Marwan",
"El-Makky, Nagwa"
] |
{MA} at {A}ra{F}in{NLP}2024: {BERT}-based Ensemble for Cross-dialectal {A}rabic Intent Detection
|
arabicnlp-1.41
|
Poster
|
2310.19034v1
|
https://aclanthology.org/2024.arabicnlp-1.42.bib
|
@inproceedings{ashraf-etal-2024-bfci,
title = "{BFCI} at {A}ra{F}in{NLP}2024: Support Vector Machines for {A}rabic Financial Text Classification",
author = "Ashraf, Nsrin and
Nayel, Hamada and
Aldawsari, Mohammed and
Shashirekha, Hosahalli and
Elshishtawy, Tarek",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.42",
pages = "446--449",
abstract = "In this paper, a description of the system submitted by BFCAI team to the AraFinNLP2024 shared task has been introduced. Our team participated in the first subtask, which aims at detecting the customer intents of cross-dialectal Arabic queries in the banking domain. Our system follows the common pipeline of text classification models using primary classification algorithms integrated with basic vectorization approach for feature extraction. Multi-layer Perceptron, Stochastic Gradient Descent and Support Vector Machines algorithms have been implemented and support vector machines outperformed all other algorithms with an f-score of 49{\%}. Our submission{'}s result is appropriate compared to the simplicity of the proposed model{'}s structure.",
}
|
This paper describes the system submitted by the BFCAI team to the AraFinNLP2024 shared task. Our team participated in the first subtask, which aims at detecting the customer intents of cross-dialectal Arabic queries in the banking domain. Our system follows the common text classification pipeline, using standard classification algorithms integrated with a basic vectorization approach for feature extraction. Multi-Layer Perceptron, Stochastic Gradient Descent, and Support Vector Machine algorithms were implemented, with Support Vector Machines outperforming all other algorithms with an F-score of 49{\%}. Our submission{'}s result is reasonable given the simplicity of the proposed model{'}s structure.
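The pipeline described here is close to the classic scikit-learn recipe below: a vectorizer feeding a linear SVM. The choice of TF-IDF as the "basic vectorization approach" and the toy queries/labels are assumptions, not ArBanking77 data.

```python
# Hedged sketch: bag-of-words vectorization + linear SVM for intent detection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for banking queries and intent ids (assumptions).
queries = ["كم رصيد حسابي", "أريد بطاقة جديدة"]
intents = [0, 1]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(queries, intents)
print(clf.predict(["ما هو رصيدي"]))  # predicted intent id for a new query
```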
|
[
"Ashraf, Nsrin",
"Nayel, Hamada",
"Aldawsari, Mohammed",
"Shashirekha, Hosahalli",
"Elshishtawy, Tarek"
] |
{BFCI} at {A}ra{F}in{NLP}2024: Support Vector Machines for {A}rabic Financial Text Classification
|
arabicnlp-1.42
|
Poster
|
1410.4863v1
|
https://aclanthology.org/2024.arabicnlp-1.43.bib
|
@inproceedings{lichouri-etal-2024-dzfinnlp,
title = "dz{F}in{N}lp at {A}ra{F}in{NLP}: Improving Intent Detection in Financial Conversational Agents",
author = "Lichouri, Mohamed and
Lounnas, Khaled and
Zakaria, Amziane",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.43",
pages = "450--455",
abstract = "In this paper, we present our dzFinNlp team{'}s contribution for intent detection in financial conversational agents, as part of the AraFinNLP shared task. We experimented with various models and feature configurations, including traditional machine learning methods like LinearSVC with TF-IDF, as well as deep learning models like Long Short-Term Memory (LSTM). Additionally, we explored the use of transformer-based models for this task. Our experiments show promising results, with our best model achieving a micro F1-score of 93.02{\%} and 67.21{\%} on the ArBanking77 dataset, in the development and test sets, respectively.",
}
|
In this paper, we present our dzFinNlp team{'}s contribution to intent detection in financial conversational agents, as part of the AraFinNLP shared task. We experimented with various models and feature configurations, including traditional machine learning methods like LinearSVC with TF-IDF, as well as deep learning models like Long Short-Term Memory (LSTM). Additionally, we explored the use of transformer-based models for this task. Our experiments show promising results, with our best model achieving micro F1-scores of 93.02{\%} and 67.21{\%} on the development and test sets of the ArBanking77 dataset, respectively.
|
[
"Lichouri, Mohamed",
"Lounnas, Khaled",
"Zakaria, Amziane"
] |
dz{F}in{N}lp at {A}ra{F}in{NLP}: Improving Intent Detection in Financial Conversational Agents
|
arabicnlp-1.43
|
Poster
|
2407.13565v1
|
https://aclanthology.org/2024.arabicnlp-1.44.bib
|
@inproceedings{hasanain-etal-2024-araieval,
title = "{A}r{AIE}val Shared Task: Propagandistic Techniques Detection in Unimodal and Multimodal {A}rabic Content",
author = "Hasanain, Maram and
Hasan, Md. Arid and
Ahmad, Fatema and
Suwaileh, Reem and
Biswas, Md. Rafiul and
Zaghouani, Wajdi and
Alam, Firoj",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.44",
pages = "456--466",
abstract = "We present an overview of the second edition of the ArAIEval shared task, organized as part of the ArabicNLP 2024 conference co-located with ACL 2024. In this edition, ArAIEval offers two tasks: (i) detection of propagandistic textual spans with persuasion techniques identification in tweets and news articles, and (ii) distinguishing between propagandistic and non-propagandistic memes. A total of 14 teams participated in the final evaluation phase, with 6 and 9 teams participating in Tasks 1 and 2, respectively. Finally, 11 teams submitted system description papers. Across both tasks, we observed that fine-tuning transformer models such as AraBERT was at the core of the majority of the participating systems. We provide a description of the task setup, including a description of the dataset construction and the evaluation setup. We further provide a brief overview of the participating systems. All datasets and evaluation scripts are released to the research community. We hope this will enable further research on these important tasks in Arabic.",
}
|
We present an overview of the second edition of the ArAIEval shared task, organized as part of the ArabicNLP 2024 conference co-located with ACL 2024. In this edition, ArAIEval offers two tasks: (i) detection of propagandistic textual spans with persuasion techniques identification in tweets and news articles, and (ii) distinguishing between propagandistic and non-propagandistic memes. A total of 14 teams participated in the final evaluation phase, with 6 and 9 teams participating in Tasks 1 and 2, respectively. Finally, 11 teams submitted system description papers. Across both tasks, we observed that fine-tuning transformer models such as AraBERT was at the core of the majority of the participating systems. We provide a description of the task setup, including a description of the dataset construction and the evaluation setup. We further provide a brief overview of the participating systems. All datasets and evaluation scripts are released to the research community. We hope this will enable further research on these important tasks in Arabic.
|
[
"Hasanain, Maram",
"Hasan, Md. Arid",
"Ahmad, Fatema",
"Suwaileh, Reem",
"Biswas, Md. Rafiul",
"Zaghouani, Wajdi",
"Alam, Firoj"
] |
{A}r{AIE}val Shared Task: Propagandistic Techniques Detection in Unimodal and Multimodal {A}rabic Content
|
arabicnlp-1.44
|
Poster
|
2407.04247v1
|
https://aclanthology.org/2024.arabicnlp-1.45.bib
|
@inproceedings{shah-etal-2024-mememind,
title = "{M}eme{M}ind at {A}r{AIE}val Shared Task: Generative Augmentation and Feature Fusion for Multimodal Propaganda Detection in {A}rabic Memes through Advanced Language and Vision Models",
author = "Shah, Uzair and
Biswas, Md. Rafiul and
Agus, Marco and
Househ, Mowafa and
Zaghouani, Wajdi",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.45",
pages = "467--472",
abstract = "Detecting propaganda in multimodal content, such as memes, is crucial for combating disinformation on social media. This paper presents a novel approach for the ArAIEval 2024 shared Task 2 on Multimodal Propagandistic Memes Classification, involving text, image, and multimodal classification of Arabic memes. For text classification (Task 2A), we fine-tune state-of-the-art Arabic language models and use ChatGPT4-generated synthetic text for data augmentation. For image classification (Task 2B), we fine-tune ResNet18, EfficientFormerV2, and ConvNeXt-tiny architectures with DALL-E-2-generated synthetic images. For multimodal classification (Task 2C), we combine ConvNeXt-tiny and BERT architectures in a fusion layer to enhance binary classification. Our results show significant performance improvements with data augmentation for text and image classification models and with the fusion layer for multimodal classification. We highlight challenges and opportunities for future research in multimodal propaganda detection in Arabic content, emphasizing the need for robust and adaptable models to combat disinformation.",
}
|
Detecting propaganda in multimodal content, such as memes, is crucial for combating disinformation on social media. This paper presents a novel approach for the ArAIEval 2024 shared Task 2 on Multimodal Propagandistic Memes Classification, involving text, image, and multimodal classification of Arabic memes. For text classification (Task 2A), we fine-tune state-of-the-art Arabic language models and use ChatGPT4-generated synthetic text for data augmentation. For image classification (Task 2B), we fine-tune ResNet18, EfficientFormerV2, and ConvNeXt-tiny architectures with DALL-E-2-generated synthetic images. For multimodal classification (Task 2C), we combine ConvNeXt-tiny and BERT architectures in a fusion layer to enhance binary classification. Our results show significant performance improvements with data augmentation for text and image classification models and with the fusion layer for multimodal classification. We highlight challenges and opportunities for future research in multimodal propaganda detection in Arabic content, emphasizing the need for robust and adaptable models to combat disinformation.
|
[
"Shah, Uzair",
"Biswas, Md. Rafiul",
"Agus, Marco",
"Househ, Mowafa",
"Zaghouani, Wajdi"
] |
{M}eme{M}ind at {A}r{AIE}val Shared Task: Generative Augmentation and Feature Fusion for Multimodal Propaganda Detection in {A}rabic Memes through Advanced Language and Vision Models
|
arabicnlp-1.45
|
Poster
|
2408.04540v1
|
https://aclanthology.org/2024.arabicnlp-1.46.bib
|
@inproceedings{alhabashi-etal-2024-asos,
title = "{ASOS} at {A}r{AIE}val Shared Task: Integrating Text and Image Embeddings for Multimodal Propaganda Detection in {A}rabic Memes",
author = "Alhabashi, Yasser and
Alharbi, Abdullah and
Ahmad, Samar and
Sibaee, Serry and
Nacar, Omer and
Ghouti, Lahouari and
Koubaa, Anis",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.46",
pages = "473--477",
abstract = "This paper describes our participation in the ArAIEval Shared Task 2024, focusing on Task 2C, which challenges participants to detect propagandistic elements in multimodal Arabic memes. The challenge involves analyzing both the textual and visual components of memes to identify underlying propagandistic messages. Our approach integrates the capabilities of MARBERT and ResNet50, top-performing pre-trained models for text and image processing, respectively. Our system architecture combines these models through a fusion layer that integrates and processes the extracted features, creating a comprehensive representation that is more effective in detecting nuanced propaganda. Our proposed system achieved significant success, placing second with an F1 score of 0.7987.",
}
|
This paper describes our participation in the ArAIEval Shared Task 2024, focusing on Task 2C, which challenges participants to detect propagandistic elements in multimodal Arabic memes. The challenge involves analyzing both the textual and visual components of memes to identify underlying propagandistic messages. Our approach integrates the capabilities of MARBERT and ResNet50, top-performing pre-trained models for text and image processing, respectively. Our system architecture combines these models through a fusion layer that integrates and processes the extracted features, creating a comprehensive representation that is more effective in detecting nuanced propaganda. Our proposed system achieved significant success, placing second with an F1 score of 0.7987.
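To make the fusion-layer idea concrete, here is a minimal sketch assuming MARBERT [CLS] features of dimension 768 and ResNet50 pooled features of dimension 2048; the hidden size, dropout, and `FusionClassifier` name are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a text+image fusion classifier: text features (e.g. MARBERT
# [CLS], 768-d) and image features (e.g. ResNet50 pooled, 2048-d) are
# concatenated and passed through a small fusion MLP.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, hidden=512, n_classes=2):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, text_feats, image_feats):
        # text_feats: (B, 768); image_feats: (B, 2048)
        fused = torch.cat([text_feats, image_feats], dim=-1)
        return self.fusion(fused)

# Smoke test with random features standing in for the two encoders.
logits = FusionClassifier()(torch.randn(4, 768), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 2])
```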
|
[
"Alhabashi, Yasser",
"Alharbi, Abdullah",
"Ahmad, Samar",
"Sibaee, Serry",
"Nacar, Omer",
"Ghouti, Lahouari",
"Koubaa, Anis"
] |
{ASOS} at {A}r{AIE}val Shared Task: Integrating Text and Image Embeddings for Multimodal Propaganda Detection in {A}rabic Memes
|
arabicnlp-1.46
|
Poster
|
2407.01360v1
|
https://aclanthology.org/2024.arabicnlp-1.47.bib
|
@inproceedings{riyadh-nabhani-2024-mela,
title = "Mela at {A}r{AIE}val Shared Task: Propagandistic Techniques Detection in {A}rabic with a Multilingual Approach",
author = "Riyadh, Md and
Nabhani, Sara",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.47",
pages = "478--482",
abstract = "This paper presents our system submitted for Task 1 of the ArAIEval Shared Task on Unimodal (Text) Propagandistic Technique Detection in Arabic. Task 1 involves identifying all employed propaganda techniques in a given text from a set of possible techniques or detecting that no propaganda technique is present. Additionally, the task requires identifying the specific spans of text where these techniques occur. We explored the capabilities of a multilingual BERT model for this task, focusing on the effectiveness of using outputs from different hidden layers within the model. By fine-tuning the multilingual BERT, we aimed to improve the model{'}s ability to recognize and locate various propaganda techniques. Our experiments showed that leveraging the hidden layers of the BERT model enhanced detection performance. Our system achieved competitive results, ranking second in the shared task, demonstrating that multilingual BERT models, combined with outputs from hidden layers, can effectively detect and identify spans of propaganda techniques in Arabic text.",
}
|
This paper presents our system submitted for Task 1 of the ArAIEval Shared Task on Unimodal (Text) Propagandistic Technique Detection in Arabic. Task 1 involves identifying all employed propaganda techniques in a given text from a set of possible techniques or detecting that no propaganda technique is present. Additionally, the task requires identifying the specific spans of text where these techniques occur. We explored the capabilities of a multilingual BERT model for this task, focusing on the effectiveness of using outputs from different hidden layers within the model. By fine-tuning the multilingual BERT, we aimed to improve the model{'}s ability to recognize and locate various propaganda techniques. Our experiments showed that leveraging the hidden layers of the BERT model enhanced detection performance. Our system achieved competitive results, ranking second in the shared task, demonstrating that multilingual BERT models, combined with outputs from hidden layers, can effectively detect and identify spans of propaganda techniques in Arabic text.
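A minimal sketch of what "using outputs from different hidden layers" can look like with Hugging Face Transformers follows; averaging the last four layers is an assumed choice, since the paper does not pin down the exact combination.

```python
# Sketch: exposing and combining hidden layers of multilingual BERT for
# token-level tagging. Averaging the last four layers is illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained(
    "bert-base-multilingual-cased", output_hidden_states=True
)

enc = tokenizer("نص تجريبي", return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# out.hidden_states: tuple of 13 tensors (embedding layer + 12 encoder
# layers), each of shape (batch, seq_len, 768).
last_four = torch.stack(out.hidden_states[-4:])  # (4, B, T, 768)
token_features = last_four.mean(dim=0)           # (B, T, 768)
# token_features would then feed a per-token classification head.
print(token_features.shape)
```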
|
[
"Riyadh, Md",
"Nabhani, Sara"
] |
Mela at {A}r{AIE}val Shared Task: Propagandistic Techniques Detection in {A}rabic with a Multilingual Approach
|
arabicnlp-1.47
|
Poster
|
2407.01360v1
|
https://aclanthology.org/2024.arabicnlp-1.48.bib
|
@inproceedings{haouhat-etal-2024-modos,
title = "{MODOS} at {A}r{AIE}val Shared Task: Multimodal Propagandistic Memes Classification Using Weighted {SAM}, {CLIP} and {A}rabian{GPT}",
author = "Haouhat, Abdelhamid and
Cherroun, Hadda and
Bellaouar, Slimane and
Nehar, Attia",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.48",
pages = "483--488",
abstract = "Arabic social media platforms are increasingly using propaganda to deceive or influence people. This propaganda is often spread through multimodal content, such as memes. While substantial research has addressed the automatic detection of propaganda in English content, this paper presents the MODOS team{'}s participation in the Arabic Multimodal Propagandistic Memes Classification shared task. Our system deploys the Segment Anything Model (SAM) and CLIP for image representation and ARABIAN-GPT embeddings for text. Then, we employ LSTM encoders followed by a weighted fusion strategy to perform binary classification. Our system achieved competitive performance in distinguishing between propagandistic and non-propagandistic memes, scored 0.7290 macro F1, and ranked 6th among the participants.",
}
|
Arabic social media platforms are increasingly using propaganda to deceive or influence people. This propaganda is often spread through multimodal content, such as memes. While substantial research has addressed the automatic detection of propaganda in English content, this paper presents the MODOS team{'}s participation in the Arabic Multimodal Propagandistic Memes Classification shared task. Our system deploys the Segment Anything Model (SAM) and CLIP for image representation and ARABIAN-GPT embeddings for text. Then, we employ LSTM encoders followed by a weighted fusion strategy to perform binary classification. Our system achieved competitive performance in distinguishing between propagandistic and non-propagandistic memes, scoring 0.7290 macro F1 and ranking 6th among the participants.
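A minimal sketch of LSTM encoders with a learnable fusion weight, in the spirit of the weighted fusion described above; the feature dimensions and the single scalar mixing weight are assumptions rather than the authors' exact design.

```python
# Sketch: each modality's feature sequence is encoded with an LSTM and the
# two encodings are mixed with a learnable weight before classification.
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, img_dim=512, txt_dim=768, hidden=256, n_classes=2):
        super().__init__()
        self.img_lstm = nn.LSTM(img_dim, hidden, batch_first=True)
        self.txt_lstm = nn.LSTM(txt_dim, hidden, batch_first=True)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable mixing weight
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, img_seq, txt_seq):
        # img_seq: (B, T1, 512), e.g. CLIP/SAM-derived features
        # txt_seq: (B, T2, 768), e.g. ArabianGPT token embeddings
        _, (img_h, _) = self.img_lstm(img_seq)
        _, (txt_h, _) = self.txt_lstm(txt_seq)
        a = torch.sigmoid(self.alpha)            # keep the weight in (0, 1)
        fused = a * img_h[-1] + (1 - a) * txt_h[-1]
        return self.head(fused)

logits = WeightedFusion()(torch.randn(2, 10, 512), torch.randn(2, 20, 768))
print(logits.shape)  # torch.Size([2, 2])
```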
|
[
"Haouhat, Abdelhamid",
"Cherroun, Hadda",
"Bellaouar, Slimane",
"Nehar, Attia"
] |
{MODOS} at {A}r{AIE}val Shared Task: Multimodal Propagandistic Memes Classification Using Weighted {SAM}, {CLIP} and {A}rabian{GPT}
|
arabicnlp-1.48
|
Poster
|
2407.04247v1
|
https://aclanthology.org/2024.arabicnlp-1.49.bib
|
@inproceedings{abir-oflazer-2024-nullpointer,
title = "Nullpointer at {A}r{AIE}val Shared Task: {A}rabic Propagandist Technique Detection with Token-to-Word Mapping in Sequence Tagging",
author = "Abir, Abrar and
Oflazer, Kemal",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.49",
pages = "489--493",
abstract = "This paper investigates the optimization of propaganda technique detection in Arabic text, including tweets {\&} news paragraphs, from ArAIEval shared task 1. Our approach involves fine-tuning the AraBERT v2 model with a neural network classifier for sequence tagging.Experimental results show relying on the first token of the word for technique prediction produces the best performance. In addition, incorporating genre information as a feature further enhances the model{'}s performance. Our system achieved a score of 25.41, placing us 4th on the leaderboard. Subsequent post-submission improvements further raised our score to 26.68.",
}
|
This paper investigates the optimization of propaganda technique detection in Arabic text, including tweets {\&} news paragraphs, from ArAIEval shared task 1. Our approach involves fine-tuning the AraBERT v2 model with a neural network classifier for sequence tagging. Experimental results show that relying on the first token of the word for technique prediction produces the best performance. In addition, incorporating genre information as a feature further enhances the model{'}s performance. Our system achieved a score of 25.41, placing us 4th on the leaderboard. Subsequent post-submission improvements further raised our score to 26.68.
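The first-sub-token strategy can be implemented with a fast tokenizer's word_ids() mapping. This is a minimal sketch with an illustrative label set, not the authors' training code; only the first sub-token of each word carries the word's tag, and the rest receive -100 so the loss ignores them.

```python
# Sketch of token-to-word mapping for sequence tagging with AraBERT v2.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")

words = ["هذا", "مثال", "قصير"]
word_labels = [0, 3, 0]  # one technique id per word (illustrative)

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned, prev = [], None
for wid in enc.word_ids(batch_index=0):
    if wid is None:                 # special tokens ([CLS], [SEP])
        aligned.append(-100)
    elif wid != prev:               # first sub-token of a new word
        aligned.append(word_labels[wid])
    else:                           # continuation sub-token: ignored by loss
        aligned.append(-100)
    prev = wid
print(aligned)
```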
|
[
"Abir, Abrar",
"Oflazer, Kemal"
] |
Nullpointer at {A}r{AIE}val Shared Task: {A}rabic Propagandist Technique Detection with Token-to-Word Mapping in Sequence Tagging
|
arabicnlp-1.49
|
Poster
|
2407.01360v1
|
https://aclanthology.org/2024.arabicnlp-1.50.bib
|
@inproceedings{biswas-etal-2024-mememind,
title = "{M}eme{M}ind at {A}r{AIE}val Shared Task: Spotting Persuasive Spans in {A}rabic Text with Persuasion Techniques Identification",
author = "Biswas, Md. Rafiul and
Shah, Zubair and
Zaghouani, Wajdi",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.50",
pages = "494--500",
abstract = "This paper focuses on detecting propagandistic spans and persuasion techniques in Arabic text from tweets and news paragraphs. Each entry in the dataset contains a text sample and corresponding labels that indicate the start and end positions of propaganda techniques within the text. Tokens falling within a labeled span were assigned {'}B{'} (Begin) or {'}I{'} (Inside) tags, {'}O{'}, corresponding to the specific propaganda technique. Using attention masks, we created uniform lengths for each span and assigned BIO tags to each token based on the provided labels. Then, we used AraBERT-base pre-trained model for Arabic text tokenization and embeddings with a token classification layer to identify propaganda techniques. Our training process involves a two-phase fine-tuning approach. First, we train only the classification layer for a few epochs, followed by full model fine-tuning, updating all parameters. This methodology allows the model to adapt to the specific characteristics of the propaganda detection task while leveraging the knowledge captured by the pretrained AraBERT model. Our approach achieved an F1 score of 0.2774, securing the 3rd position in the leaderboard of Task 1.",
}
|
This paper focuses on detecting propagandistic spans and persuasion techniques in Arabic text from tweets and news paragraphs. Each entry in the dataset contains a text sample and corresponding labels that indicate the start and end positions of propaganda techniques within the text. Tokens falling within a labeled span were assigned {'}B{'} (Begin) or {'}I{'} (Inside) tags corresponding to the specific propaganda technique, while tokens outside any span were tagged {'}O{'} (Outside). Using attention masks, we created uniform lengths for each span and assigned BIO tags to each token based on the provided labels. Then, we used the AraBERT-base pre-trained model for Arabic text tokenization and embeddings with a token classification layer to identify propaganda techniques. Our training process involves a two-phase fine-tuning approach. First, we train only the classification layer for a few epochs, followed by full model fine-tuning, updating all parameters. This methodology allows the model to adapt to the specific characteristics of the propaganda detection task while leveraging the knowledge captured by the pretrained AraBERT model. Our approach achieved an F1 score of 0.2774, securing the 3rd position in the leaderboard of Task 1.
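A minimal sketch of the two-phase schedule described above: freeze the encoder while training only the classification head, then unfreeze everything for full fine-tuning. The checkpoint name, label count (23 techniques in BIO plus O), and training details are assumptions, not the authors' exact configuration.

```python
# Sketch of two-phase fine-tuning for token classification.
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv2",
    num_labels=47,  # e.g. B/I tags for 23 techniques + O (illustrative)
)

def set_encoder_trainable(trainable: bool) -> None:
    """Freeze or unfreeze the BERT encoder; the head stays trainable."""
    for param in model.bert.parameters():
        param.requires_grad = trainable

# Phase 1: train the classification head only, encoder frozen.
set_encoder_trainable(False)
# ... train for a few epochs ...

# Phase 2: full fine-tuning, all parameters updated.
set_encoder_trainable(True)
# ... continue training, typically at a lower learning rate ...
```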
|
[
"Biswas, Md. Rafiul",
"Shah, Zubair",
"Zaghouani, Wajdi"
] |
{M}eme{M}ind at {A}r{AIE}val Shared Task: Spotting Persuasive Spans in {A}rabic Text with Persuasion Techniques Identification
|
arabicnlp-1.50
|
Poster
|
2408.04540v1
|
https://aclanthology.org/2024.arabicnlp-1.51.bib
|
@inproceedings{wang-markov-2024-cltl-araieval,
title = "{CLTL} at {A}r{AIE}val Shared Task: Multimodal Propagandistic Memes Classification Using Transformer Models",
author = "Wang, Yeshan and
Markov, Ilia",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.51",
pages = "501--506",
abstract = "We present the CLTL system designed for the ArAIEval Shared Task 2024 on multimodal propagandistic memes classification in Arabic. The challenge was divided into three subtasks: identifying propagandistic content from textual modality of memes (subtask 2A), from visual modality of memes (subtask 2B), and in a multimodal scenario when both modalities are combined (subtask 2C). We explored various unimodal transformer models for Arabic language processing (subtask 2A), visual models for image processing (subtask 2B), and concatenated text and image embeddings using the Multilayer Perceptron fusion module for multimodal propagandistic memes classification (subtask 2C). Our system achieved 77.96{\%} for subtask 2A, 71.04{\%} for subtask 2B, and 79.80{\%} for subtask 2C, ranking 2nd, 1st, and 3rd on the leaderboard.",
}
|
We present the CLTL system designed for the ArAIEval Shared Task 2024 on multimodal propagandistic memes classification in Arabic. The challenge was divided into three subtasks: identifying propagandistic content from textual modality of memes (subtask 2A), from visual modality of memes (subtask 2B), and in a multimodal scenario when both modalities are combined (subtask 2C). We explored various unimodal transformer models for Arabic language processing (subtask 2A), visual models for image processing (subtask 2B), and concatenated text and image embeddings using the Multilayer Perceptron fusion module for multimodal propagandistic memes classification (subtask 2C). Our system achieved 77.96{\%} for subtask 2A, 71.04{\%} for subtask 2B, and 79.80{\%} for subtask 2C, ranking 2nd, 1st, and 3rd on the leaderboard.
|
[
"Wang, Yeshan",
"Markov, Ilia"
] |
{CLTL} at {A}r{AIE}val Shared Task: Multimodal Propagandistic Memes Classification Using Transformer Models
|
arabicnlp-1.51
|
Poster
|
2407.04247v1
|
https://aclanthology.org/2024.arabicnlp-1.52.bib
|
@inproceedings{labib-etal-2024-cuet,
title = "{CUET}{\_}sstm at {A}r{AIE}val Shared Task: Unimodal (Text) Propagandistic Technique Detection Using Transformer-Based Model",
author = "Labib, Momtazul and
Rahman, Samia and
Murad, Hasan and
Das, Udoy",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.52",
pages = "507--511",
abstract = "In recent days, propaganda has started to influence public opinion increasingly as social media usage continues to grow. Our research has been part of the first challenge, Unimodal (Text) Propagandistic Technique Detection of ArAIEval shared task at the ArabicNLP 2024 conference, co-located with ACL 2024, identifying specific Arabic text spans using twenty-three propaganda techniques. We have augmented underrepresented techniques in the provided dataset using synonym replacement and have evaluated various machine learning (RF, SVM, MNB), deep learning (BiLSTM), and transformer-based models (bert-base-arabic, Marefa-NER, AraBERT) with transfer learning. Our comparative study has shown that the transformer model {``}bert-base-arabic{''} has outperformed other models. Evaluating the test set, it has achieved the micro-F1 score of 0.2995 which is the highest. This result has secured our team {``}CUET{\_}sstm{''} first place among all participants in task 1 of the ArAIEval.",
}
|
Propaganda increasingly influences public opinion as social media usage continues to grow. Our research has been part of the first challenge, Unimodal (Text) Propagandistic Technique Detection, of the ArAIEval shared task at the ArabicNLP 2024 conference, co-located with ACL 2024, identifying specific Arabic text spans corresponding to twenty-three propaganda techniques. We augmented underrepresented techniques in the provided dataset using synonym replacement and evaluated various machine learning (RF, SVM, MNB), deep learning (BiLSTM), and transformer-based models (bert-base-arabic, Marefa-NER, AraBERT) with transfer learning. Our comparative study showed that the transformer model {``}bert-base-arabic{''} outperformed the other models. On the test set, it achieved the highest micro-F1 score of 0.2995. This result secured our team {``}CUET{\_}sstm{''} first place among all participants in task 1 of the ArAIEval.
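A minimal sketch of synonym-replacement augmentation follows; the tiny inline synonym dictionary and the replacement probability are stand-ins, since the paper does not specify its lexical resource.

```python
# Sketch: synonym replacement for augmenting underrepresented classes.
import random

SYNONYMS = {  # illustrative placeholder entries, not a real lexicon
    "كبير": ["ضخم", "هائل"],
    "سريع": ["عاجل", "خاطف"],
}

def synonym_replace(tokens: list[str], p: float = 0.2) -> list[str]:
    """Replace each token that has known synonyms with probability p."""
    out = []
    for tok in tokens:
        if tok in SYNONYMS and random.random() < p:
            out.append(random.choice(SYNONYMS[tok]))
        else:
            out.append(tok)
    return out

random.seed(0)
print(synonym_replace(["خبر", "كبير", "و", "سريع"]))
```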
|
[
"Labib, Momtazul",
"Rahman, Samia",
"Murad, Hasan",
"Das, Udoy"
] |
{CUET}{\_}sstm at {A}r{AIE}val Shared Task: Unimodal (Text) Propagandistic Technique Detection Using Transformer-Based Model
|
arabicnlp-1.52
|
Poster
|
2407.01360v1
|
https://aclanthology.org/2024.arabicnlp-1.53.bib
|
@inproceedings{zaytoon-etal-2024-alexunlp,
title = "{A}lex{UNLP}-{MZ} at {A}r{AIE}val Shared Task: Contrastive Learning, {LLM} Features Extraction and Multi-Objective Optimization for {A}rabic Multi-Modal Meme Propaganda Detection",
author = "Zaytoon, Mohamed and
El-Makky, Nagwa and
Torki, Marwan",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.53",
pages = "512--517",
abstract = "The rise of memes as a tool for spreading propaganda presents a significant challenge in the current digital environment. In this paper, we outline our work for the ArAIEval Shared Task2 in ArabicNLP 2024. This study introduces a method for identifying propaganda in Arabic memes using a multimodal system that combines textual and visual indicators to enhance the result. Our approach achieves the first place in text classification with Macro-F1 of 78.69{\%}, the third place in image classification with Macro-F1 of 65.92{\%}, and the first place in multimodal classification with Macro-F1 of 80.51{\%}",
}
|
The rise of memes as a tool for spreading propaganda presents a significant challenge in the current digital environment. In this paper, we outline our work for the ArAIEval Shared Task 2 in ArabicNLP 2024. This study introduces a method for identifying propaganda in Arabic memes using a multimodal system that combines textual and visual indicators to enhance the results. Our approach achieves first place in text classification with a Macro-F1 of 78.69{\%}, third place in image classification with a Macro-F1 of 65.92{\%}, and first place in multimodal classification with a Macro-F1 of 80.51{\%}.
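The paper's title mentions contrastive learning without the abstract detailing the loss; as one plausible reading, here is a minimal supervised contrastive loss sketch (in the style of Khosla et al.), with the temperature value as an assumption.

```python
# Sketch: supervised contrastive loss over a batch of embeddings, pulling
# same-label examples together and pushing different-label examples apart.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    # embeddings: (N, D) float tensor; labels: (N,) long tensor
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature                       # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))   # drop self-similarity
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                            # anchors with positives
    loss = -(log_prob * pos_mask).sum(1)[valid] / pos_counts[valid]
    return loss.mean()

loss = supervised_contrastive_loss(torch.randn(8, 64),
                                   torch.tensor([0, 0, 1, 1, 0, 1, 0, 1]))
print(loss.item())
```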
|
[
"Zaytoon, Mohamed",
"El-Makky, Nagwa",
"Torki, Marwan"
] |
{A}lex{UNLP}-{MZ} at {A}r{AIE}val Shared Task: Contrastive Learning, {LLM} Features Extraction and Multi-Objective Optimization for {A}rabic Multi-Modal Meme Propaganda Detection
|
arabicnlp-1.53
|
Poster
|
2407.01360v1
|
https://aclanthology.org/2024.arabicnlp-1.54.bib
|
@inproceedings{shohan-etal-2024-semanticcuetsync,
title = "{S}emantic{C}uet{S}ync at {A}r{AIE}val Shared Task: Detecting Propagandistic Spans with Persuasion Techniques Identification using Pre-trained Transformers",
author = "Shohan, Symom and
Hossain, Md. and
Paran, Ashraful and
Ahsan, Shawly and
Hossain, Jawad and
Hoque, Mohammed Moshiul",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.54",
pages = "518--523",
abstract = "Detecting propagandistic spans and identifying persuasion techniques are crucial for promoting informed decision-making, safeguarding democratic processes, and fostering a media environment characterized by integrity and transparency. Various machine learning (Logistic Regression, Random Forest, and Multinomial Naive Bayes), deep learning (CNN, CNN+LSTM, CNN+BiLSTM), and transformer-based (AraBERTv2, AraBERT-NER, CamelBERT, BERT-Base-Arabic) models were exploited to perform the task. The evaluation results indicate that CamelBERT achieved the highest micro-F1 score (24.09{\%}), outperforming CNN+LSTM and AraBERTv2. The study found that most models struggle to detect propagandistic spans when multiple spans are present within the same article. Overall, the model{'}s performance secured a $6^{th}$ place ranking in the ArAIEval Shared Task-1.",
}
|
Detecting propagandistic spans and identifying persuasion techniques are crucial for promoting informed decision-making, safeguarding democratic processes, and fostering a media environment characterized by integrity and transparency. Various machine learning (Logistic Regression, Random Forest, and Multinomial Naive Bayes), deep learning (CNN, CNN+LSTM, CNN+BiLSTM), and transformer-based (AraBERTv2, AraBERT-NER, CamelBERT, BERT-Base-Arabic) models were exploited to perform the task. The evaluation results indicate that CamelBERT achieved the highest micro-F1 score (24.09{\%}), outperforming CNN+LSTM and AraBERTv2. The study found that most models struggle to detect propagandistic spans when multiple spans are present within the same article. Overall, the model{'}s performance secured a $6^{th}$ place ranking in the ArAIEval Shared Task-1.
|
[
"Shohan, Symom",
"Hossain, Md.",
"Paran, Ashraful",
"Ahsan, Shawly",
"Hossain, Jawad",
"Hoque, Mohammed Moshiul"
] |
{S}emantic{C}uet{S}ync at {A}r{AIE}val Shared Task: Detecting Propagandistic Spans with Persuasion Techniques Identification using Pre-trained Transformers
|
arabicnlp-1.54
|
Poster
|
2407.04247v1
|
https://aclanthology.org/2024.arabicnlp-1.55.bib
|
@inproceedings{fouad-weeds-2024-sussexai,
title = "{S}ussex{AI} at {A}r{AIE}val Shared Task: Mitigating Class Imbalance in {A}rabic Propaganda Detection",
author = "Fouad, Mary and
Weeds, Julie",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.55",
pages = "524--529",
abstract = "In this paper, we are exploring mitigating class imbalancein Arabic propaganda detection. Given amultigenre text which could be a news paragraphor a tweet, the objective is to identify the propagandatechnique employed in the text along withthe exact span(s) where each technique occurs. Weapproach this task as a sequence tagging task. Weutilise AraBERT for sequence classification andimplement data augmentation and random truncationmethods to mitigate the class imbalance withinthe dataset. We demonstrate the importance ofconsidering macro-F1 as well as micro-F1 whenevaluating classifier performance in this scenario.",
}
|
In this paper, we explore mitigating class imbalance in Arabic propaganda detection. Given a multigenre text, which could be a news paragraph or a tweet, the objective is to identify the propaganda technique employed in the text along with the exact span(s) where each technique occurs. We approach this task as a sequence tagging task. We utilise AraBERT for sequence classification and implement data augmentation and random truncation methods to mitigate the class imbalance within the dataset. We demonstrate the importance of considering macro-F1 as well as micro-F1 when evaluating classifier performance in this scenario.
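The paper names "random truncation" without specifying the scheme; one plausible reading is randomly shortening long runs of unlabeled ('O') tokens so that labeled spans make up a larger share of each training sequence, sketched below with illustrative parameters.

```python
# Sketch: randomly shorten contiguous runs of 'O' tokens longer than a cap,
# as one possible reading of random truncation for class-imbalance relief.
import random

def truncate_o_runs(tokens, tags, max_o_run=20):
    def shorten(run):
        if len(run) <= max_o_run:
            return run
        start = random.randrange(len(run) - max_o_run + 1)
        return run[start:start + max_o_run]   # keep a contiguous window

    out_tokens, out_tags, run = [], [], []
    for tok, tag in zip(tokens, tags):
        if tag == "O":
            run.append(tok)
            continue
        kept = shorten(run)
        out_tokens += kept
        out_tags += ["O"] * len(kept)
        run = []
        out_tokens.append(tok)
        out_tags.append(tag)
    kept = shorten(run)                        # flush the trailing O-run
    out_tokens += kept
    out_tags += ["O"] * len(kept)
    return out_tokens, out_tags
```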
|
[
"Fouad, Mary",
"Weeds, Julie"
] |
{S}ussex{AI} at {A}r{AIE}val Shared Task: Mitigating Class Imbalance in {A}rabic Propaganda Detection
|
arabicnlp-1.55
|
Poster
|
2407.01360v1
|
https://aclanthology.org/2024.arabicnlp-1.56.bib
|
@inproceedings{zaghouani-etal-2024-fignews,
title = "The {FIGNEWS} Shared Task on News Media Narratives",
author = "Zaghouani, Wajdi and
Jarrar, Mustafa and
Habash, Nizar and
Bouamor, Houda and
Zitouni, Imed and
Diab, Mona and
El-Beltagy, Samhaa and
AbuOdeh, Muhammed",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.56",
pages = "530--547",
abstract = "We present an overview of the FIGNEWSshared task, organized as part of the Arabic-NLP 2024 conference co-located with ACL2024. The shared task addresses bias and pro-paganda annotation in multilingual news posts.We focus on the early days of the Israel War onGaza as a case study. The task aims to fostercollaboration in developing annotation guide-lines for subjective tasks by creating frame-works for analyzing diverse narratives high-lighting potential bias and propaganda. In aspirit of fostering and encouraging diversity,we address the problem from a multilingualperspective, namely within five languages: En-glish, French, Arabic, Hebrew, and Hindi. Atotal of 17 teams participated in two annota-tion subtasks: bias (16 teams) and propaganda(6 teams). The teams competed in four evalua-tion tracks: guidelines development, annotationquality, annotation quantity, and consistency.Collectively, the teams produced 129,800 datapoints. Key findings and implications for thefield are discussed.",
}
|
We present an overview of the FIGNEWS shared task, organized as part of the ArabicNLP 2024 conference co-located with ACL 2024. The shared task addresses bias and propaganda annotation in multilingual news posts. We focus on the early days of the Israel War on Gaza as a case study. The task aims to foster collaboration in developing annotation guidelines for subjective tasks by creating frameworks for analyzing diverse narratives highlighting potential bias and propaganda. In a spirit of fostering and encouraging diversity, we address the problem from a multilingual perspective, namely within five languages: English, French, Arabic, Hebrew, and Hindi. A total of 17 teams participated in two annotation subtasks: bias (16 teams) and propaganda (6 teams). The teams competed in four evaluation tracks: guidelines development, annotation quality, annotation quantity, and consistency. Collectively, the teams produced 129,800 data points. Key findings and implications for the field are discussed.
|
[
"Zaghouani, Wajdi",
"Jarrar, Mustafa",
"Habash, Nizar",
"Bouamor, Houda",
"Zitouni, Imed",
"Diab, Mona",
"El-Beltagy, Samhaa",
"AbuOdeh, Muhammed"
] |
The {FIGNEWS} Shared Task on News Media Narratives
|
arabicnlp-1.56
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.57.bib
|
@inproceedings{alemadi-etal-2024-narrative,
title = "Narrative Navigators at {FIGNEWS} 2024 Shared Task: New Frontiers in Bias and Propaganda Annotation Techniques",
author = "AlEmadi, Maryam and
ElMesselmani, Jana and
Bermak, Lyna and
Abdullah, Goumana and
Sharqawi, Esra{'}a and
Jrad, Anissa and
Zouabi, Zied and
Zaghouani, Wajdi",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.57",
pages = "548--554",
abstract = "This paper presents our team{'}s contribution to the FIGNEWS 2024 Shared Task, which involved annotating bias and propaganda in news coverage of the Israel-Palestine conflict. We developed comprehensive guidelines and employed a rigorous methodology to analyze 2,200 news posts from several official Facebook accounts of news websites in multiple languages. Our team, Narrative Navigators, achieved third place in both the Bias Guidelines and Bias Consistency tracks, demonstrating the effectiveness of our approach. We achieved an IAA Kappa score of 39.4 for bias annotation and 12.8 for propaganda detection. These findings and our performance underscore the need for enhanced media literacy and further research to counter the impact of biased and misleading information on public understanding of the conflict.",
}
|
This paper presents our team{'}s contribution to the FIGNEWS 2024 Shared Task, which involved annotating bias and propaganda in news coverage of the Israel-Palestine conflict. We developed comprehensive guidelines and employed a rigorous methodology to analyze 2,200 news posts from several official Facebook accounts of news websites in multiple languages. Our team, Narrative Navigators, achieved third place in both the Bias Guidelines and Bias Consistency tracks, demonstrating the effectiveness of our approach. We achieved an IAA Kappa score of 39.4 for bias annotation and 12.8 for propaganda detection. These findings and our performance underscore the need for enhanced media literacy and further research to counter the impact of biased and misleading information on public understanding of the conflict.
|
[
"AlEmadi, Maryam",
"ElMesselmani, Jana",
"Bermak, Lyna",
"Abdullah, Goumana",
"Sharqawi, Esra{'}a",
"Jrad, Anissa",
"Zouabi, Zied",
"Zaghouani, Wajdi"
] |
Narrative Navigators at {FIGNEWS} 2024 Shared Task: New Frontiers in Bias and Propaganda Annotation Techniques
|
arabicnlp-1.57
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.58.bib
|
@inproceedings{jafari-etal-2024-dragon,
title = "{DRAGON} at {FIGNEWS} 2024 Shared Task: a Dedicated {RAG} for {O}ctober 7th conflict News",
author = "Jafari, Sadegh and
Mahmoodzadeh, Mohsen and
Nazari, Vanooshe and
Bahmanyar, Razieh and
Burrows, Kathryn",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.58",
pages = "555--560",
abstract = "In this study, we present a novel approach to annotating bias and propaganda in social media data by leveraging topic modeling techniques. Utilizing the BERTopic tool, we performed topic modeling on the FIGNEWS Shared-task dataset, which initially comprised 13,500 samples. From this dataset, we identified 35 distinct topics and selected approximately 50 representative samples from each topic, resulting in a subset of 1,812 samples. These selected samples were meticulously annotated for bias and propaganda labels. Subsequently, we employed multiple methods like KNN, SVC, XGBoost, and RAG to develop a classifier capable of detecting bias and propaganda within social media content. Our approach demonstrates the efficacy of using topic modeling for efficient data subset selection and provides a robust foundation for improving the accuracy of bias and propaganda detection in large-scale social media datasets.",
}
|
In this study, we present a novel approach to annotating bias and propaganda in social media data by leveraging topic modeling techniques. Utilizing the BERTopic tool, we performed topic modeling on the FIGNEWS Shared-task dataset, which initially comprised 13,500 samples. From this dataset, we identified 35 distinct topics and selected approximately 50 representative samples from each topic, resulting in a subset of 1,812 samples. These selected samples were meticulously annotated for bias and propaganda labels. Subsequently, we employed multiple methods like KNN, SVC, XGBoost, and RAG to develop a classifier capable of detecting bias and propaganda within social media content. Our approach demonstrates the efficacy of using topic modeling for efficient data subset selection and provides a robust foundation for improving the accuracy of bias and propaganda detection in large-scale social media datasets.
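A minimal sketch of the BERTopic-driven subset selection described above follows; `load_fignews_posts` is a hypothetical loader, and the constructor arguments and per-topic cap are illustrative.

```python
# Sketch: fit topics over the corpus with BERTopic, then keep ~50 documents
# per topic as a representative subset for manual annotation.
from bertopic import BERTopic

docs = load_fignews_posts()  # hypothetical loader returning list[str]

topic_model = BERTopic(language="multilingual", nr_topics=35)
topics, probs = topic_model.fit_transform(docs)

per_topic, subset = {}, []
for doc, topic in zip(docs, topics):
    bucket = per_topic.setdefault(topic, [])
    if topic != -1 and len(bucket) < 50:   # -1 is BERTopic's outlier topic
        bucket.append(doc)
        subset.append(doc)
print(len(subset), "documents selected for annotation")
```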
|
[
"Jafari, Sadegh",
"Mahmoodzadeh, Mohsen",
"Nazari, Vanooshe",
"Bahmanyar, Razieh",
"Burrows, Kathryn"
] |
{DRAGON} at {FIGNEWS} 2024 Shared Task: a Dedicated {RAG} for {O}ctober 7th conflict News
|
arabicnlp-1.58
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.59.bib
|
@inproceedings{el-ghawi-etal-2024-lexiconladies,
title = "{L}exicon{L}adies at {FIGNEWS} 2024 Shared Task: Identifying Keywords for Bias Annotation Guidelines of {F}acebook News Headlines on the {I}srael-{P}alestine 2023 War",
author = "El-Ghawi, Yousra and
Marzouk, Abeer and
Khamis, Aya",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.59",
pages = "561--566",
abstract = "News bias is difficult for humans to identify, but even more so for machines. This is largely due to the lack of linguistically appropriate annotated datasets suitable for use by classifier algorithms. The FIGNEWS Subtask 1: Bias Annotation involved classifying bias through manually annotated 1800 headlines from social media. Our proposed guidelines investigated which combinations of keywords available for classification, across sentence and token levels, may be used to detect possible bias in a conflict where neutrality is highly undesirable. Much of the headlines{'} percentage required contextual knowledge of events to identify criteria that matched biased or targeted language. The final annotation guidelines paved the way for a theoretical system which uses keyword and hashtag significance to classify major instances of bias. Minor instances with bias undertones or clickbait may require advanced machine learning methods which learn context through scraping user engagements on social media.",
}
|
News bias is difficult for humans to identify, but even more so for machines. This is largely due to the lack of linguistically appropriate annotated datasets suitable for use by classifier algorithms. The FIGNEWS Subtask 1: Bias Annotation involved classifying bias through 1800 manually annotated headlines from social media. Our proposed guidelines investigated which combinations of keywords available for classification, across sentence and token levels, may be used to detect possible bias in a conflict where neutrality is highly undesirable. A large percentage of the headlines required contextual knowledge of events to identify criteria that matched biased or targeted language. The final annotation guidelines paved the way for a theoretical system which uses keyword and hashtag significance to classify major instances of bias. Minor instances with bias undertones or clickbait may require advanced machine learning methods which learn context through scraping user engagements on social media.
|
[
"El-Ghawi, Yousra",
"Marzouk, Abeer",
"Khamis, Aya"
] |
{L}exicon{L}adies at {FIGNEWS} 2024 Shared Task: Identifying Keywords for Bias Annotation Guidelines of {F}acebook News Headlines on the {I}srael-{P}alestine 2023 War
|
arabicnlp-1.59
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.60.bib
|
@inproceedings{nwesri-etal-2024-uot1,
title = "Uot1 at {FIGNEWS} 2024 Shared Task: Labeling News Bias",
author = "Nwesri, Abdusalam and
Elbaabaa, Mai and
Lashihar, Fatima and
Alalos, Fatma",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.60",
pages = "567--572",
abstract = "This paper outlines the University of Tripoli{'}s initiative in creating annotation guidelines to detect bias in news articles concerning the Palestinian-Israeli conflict. Our team participated in the Framing of Israeli Gaza News Media Narrative (FIGNEWS 2024) shared task. We developed annotation guidelines to label bias in news articles. Using those guidelines we managed to annotate 3,900 articles with the aid of our custom-developed annotation tool. Among 16 participating teams, we scored 48.7 on the macro F1 measure in the quality track in which we ranked 4th. In the centrality track we were ranked at the 6th position using the macro F1 avg measure, however, we achieved the 4th best kappa coefficient. Our bias annotation guidelines was ranked in the 9th position.",
}
|
This paper outlines the University of Tripoli{'}s initiative in creating annotation guidelines to detect bias in news articles concerning the Palestinian-Israeli conflict. Our team participated in the Framing of Israeli Gaza News Media Narrative (FIGNEWS 2024) shared task. We developed annotation guidelines to label bias in news articles. Using those guidelines, we annotated 3,900 articles with the aid of our custom-developed annotation tool. Among 16 participating teams, we scored 48.7 on the macro F1 measure in the quality track, in which we ranked 4th. In the centrality track, we were ranked 6th using the macro F1 avg measure; however, we achieved the 4th best kappa coefficient. Our bias annotation guidelines were ranked in the 9th position.
|
[
"Nwesri, Abdusalam",
"Elbaabaa, Mai",
"Lashihar, Fatima",
"Alalos, Fatma"
] |
Uot1 at {FIGNEWS} 2024 Shared Task: Labeling News Bias
|
arabicnlp-1.60
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.61.bib
|
@inproceedings{abdul-rauf-etal-2024-nlpcolab,
title = "{NLPC}olab at {F}ig{N}ews 2024 Shared Task: Challenges in Bias and Propaganda Annotation for News Media",
author = "Abdul Rauf, Sadaf and
Sarfraz, Huda and
Nauman, Saadia and
Fatima, Arooj and
SadafZiafat, SadafZiafat and
Ishfaq, Momina and
Suboor, Alishba and
Afzal, Hammad and
Latif, Seemab",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.61",
pages = "573--579",
abstract = "In this paper, we present our methodology and findings from participating in the FIGNEWS 2024 shared task on annotating news fragments on the Gaza-Israel war for bias and propaganda detection. The task aimed to refine the FIGNEWS 2024 annotation guidelines and to contribute to the creation of a comprehensive dataset to advance research in this field. Our team employed a multi-faceted approach to ensure high accuracy in data annotations. Our results highlight key challenges in detecting bias and propaganda, such as the need for more comprehensive guidelines. Our team ranked first in all tracks for propaganda annotation. For Bias, the team stood in first place for the Guidelines and IAA tracks, and in second place for the Quantity and Consistency tracks.",
}
|
In this paper, we present our methodology and findings from participating in the FIGNEWS 2024 shared task on annotating news fragments on the Gaza-Israel war for bias and propaganda detection. The task aimed to refine the FIGNEWS 2024 annotation guidelines and to contribute to the creation of a comprehensive dataset to advance research in this field. Our team employed a multi-faceted approach to ensure high accuracy in data annotations. Our results highlight key challenges in detecting bias and propaganda, such as the need for more comprehensive guidelines. Our team ranked first in all tracks for propaganda annotation. For Bias, the team stood in first place for the Guidelines and IAA tracks, and in second place for the Quantity and Consistency tracks.
|
[
"Abdul Rauf, Sadaf",
"Sarfraz, Huda",
"Nauman, Saadia",
"Fatima, Arooj",
"SadafZiafat, SadafZiafat",
"Ishfaq, Momina",
"Suboor, Alishba",
"Afzal, Hammad",
"Latif, Seemab"
] |
{NLPC}olab at {F}ig{N}ews 2024 Shared Task: Challenges in Bias and Propaganda Annotation for News Media
|
arabicnlp-1.61
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.62.bib
|
@inproceedings{heierli-etal-2024-bias,
title = "Bias Bluff Busters at {FIGNEWS} 2024 Shared Task: Developing Guidelines to Make Bias Conscious",
author = "Heierli, Jasmin and
Pareti, Silvia and
Pareti, Serena and
Lando, Tatiana",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.62",
pages = "580--589",
abstract = "This paper details our participation in the FIGNEWS-2024 shared task on bias and propaganda annotation in Gaza conflict news. Our objectives were to develop robust guidelines and annotate a substantial dataset to enhance bias detection. We iteratively refined our guidelines and used examples for clarity. Key findings include the challenges in achieving high inter-annotator agreement and the importance of annotator awareness of their own biases. We also explored the integration of ChatGPT as an annotator to support consistency. This paper contributes to the field by providing detailed annotation guidelines, and offering insights into the subjectivity of bias annotation.",
}
|
This paper details our participation in the FIGNEWS-2024 shared task on bias and propaganda annotation in Gaza conflict news. Our objectives were to develop robust guidelines and annotate a substantial dataset to enhance bias detection. We iteratively refined our guidelines and used examples for clarity. Key findings include the challenges in achieving high inter-annotator agreement and the importance of annotator awareness of their own biases. We also explored the integration of ChatGPT as an annotator to support consistency. This paper contributes to the field by providing detailed annotation guidelines and offering insights into the subjectivity of bias annotation.
|
[
"Heierli, Jasmin",
"Pareti, Silvia",
"Pareti, Serena",
"L",
"o, Tatiana"
] |
Bias Bluff Busters at {FIGNEWS} 2024 Shared Task: Developing Guidelines to Make Bias Conscious
|
arabicnlp-1.62
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.63.bib
|
@inproceedings{sadiah-etal-2024-ceasefire,
title = "Ceasefire at {FIGNEWS} 2024 Shared Task: Automated Detection and Annotation of Media Bias Using Large Language Models",
author = "Sadiah, Noor and
Al-Emadi, Sara and
Rahman, Sumaya",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.63",
pages = "590--600",
abstract = "In this paper, we present our approach for FIGNEWS Subtask 1, which focuses on detecting bias in news media narratives about the Israel war on Gaza. We used a Large Language Model (LLM) and prompt engineering, using GPT-3.5 Turbo API, to create a model that automatically flags biased news media content with 99{\%} accuracy. This approach provides Natural Language Processing (NLP) researchers with a robust and effective solution for automating bias detection in news media narratives using supervised learning algorithms. Additionally, this paper provides a detailed analysis of the labeled content, offering valuable insights into media bias in conflict reporting. Our work advances automated content analysis and enhances understanding of media bias.",
}
|
In this paper, we present our approach for FIGNEWS Subtask 1, which focuses on detecting bias in news media narratives about the Israel war on Gaza. We used a Large Language Model (LLM) and prompt engineering, using GPT-3.5 Turbo API, to create a model that automatically flags biased news media content with 99{\%} accuracy. This approach provides Natural Language Processing (NLP) researchers with a robust and effective solution for automating bias detection in news media narratives using supervised learning algorithms. Additionally, this paper provides a detailed analysis of the labeled content, offering valuable insights into media bias in conflict reporting. Our work advances automated content analysis and enhances understanding of media bias.
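A minimal sketch of prompt-based bias flagging with the OpenAI API follows; the prompt text and label strings are assumptions and do not reproduce the authors' exact prompt engineering.

```python
# Sketch: classify a news post into bias categories via a chat completion.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["Unbiased", "Biased against Palestine", "Biased against Israel",
          "Biased against others", "Unclear", "Not Applicable"]

def classify_bias(post: str) -> str:
    """Return one bias label for the given post (illustrative label set)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You are a media-bias annotator. Answer with exactly "
                        "one of: " + "; ".join(LABELS)},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content.strip()
```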
|
[
"Sadiah, Noor",
"Al-Emadi, Sara",
"Rahman, Sumaya"
] |
Ceasefire at {FIGNEWS} 2024 Shared Task: Automated Detection and Annotation of Media Bias Using Large Language Models
|
arabicnlp-1.63
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.64.bib
|
@inproceedings{solla-etal-2024-sahara,
title = "{S}ahara Pioneers at {FIGNEWS} 2024 Shared Task: Data Annotation Guidelines for Propaganda Detection in News Items",
author = "Solla, Marwa and
Ebrahem, Hassan and
Issa, Alya and
Harmain, Harmain and
Nwesri, Abdusalam",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.64",
pages = "601--608",
abstract = "In today{'}s digital age, the spread of propaganda through news channels has become a pressing concern. To address this issue, the research community has organized a shared task on detecting propaganda in news posts. This paper aims to present the work carried out at the University of Tripoli for the development and implementation of data annotation guidelines by a team of five annotators. The guidelines were used to annotate 2600 news articles. Each article is labeled as {``}propaganda{''}, {``}Not propaganda{''}, {``}Not Applicable{''}, or {``}Not clear{''}. The shared task results put our efforts in the third position among 6 participating teams in the consistency track.",
}
|
In today{'}s digital age, the spread of propaganda through news channels has become a pressing concern. To address this issue, the research community has organized a shared task on detecting propaganda in news posts. This paper aims to present the work carried out at the University of Tripoli for the development and implementation of data annotation guidelines by a team of five annotators. The guidelines were used to annotate 2600 news articles. Each article is labeled as {``}propaganda{''}, {``}Not propaganda{''}, {``}Not Applicable{''}, or {``}Not clear{''}. The shared task results placed us third among the 6 participating teams in the consistency track.
|
[
"Solla, Marwa",
"Ebrahem, Hassan",
"Issa, Alya",
"Harmain, Harmain",
"Nwesri, Abdusalam"
] |
{S}ahara Pioneers at {FIGNEWS} 2024 Shared Task: Data Annotation Guidelines for Propaganda Detection in News Items
|
arabicnlp-1.64
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.65.bib
|
@inproceedings{blqees-etal-2024-biasganda,
title = "{B}ias{G}anda at {FIGNEWS} 2024 Shared Task: A Quest to Uncover Biased Views in News Coverage",
author = "Blqees, Blqees and
Wardi, Al and
Al-Sibani, Malath and
Al-Siyabi, Hiba and
Zidjaly, Najma",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.65",
pages = "609--613",
abstract = "In this study, we aimed to identify biased language in a dataset provided by the FIGNEWS 2024 committee on the Gaza-Israel war. We classified entries into seven categories: Unbiased, Biased against Palestine, Biased against Israel, Biased against Others, Biased against both Palestine and Israel, Unclear, and Not Applicable. Our team reviewed the literature to develop a codebook of terminologies and definitions. By coding each example, we sought to detect language tendencies used by media outlets when reporting on the same event. The primary finding was that most examples were classified as {``}Biased against Palestine,{''} as all examined language data used one-sided terms to describe the October 7 event. The least used category was {``}Not Applicable,{''} reserved for irrelevant examples or those lacking context. It is recommended to use neutral and balanced language when reporting volatile political news.",
}
|
In this study, we aimed to identify biased language in a dataset provided by the FIGNEWS 2024 committee on the Gaza-Israel war. We classified entries into seven categories: Unbiased, Biased against Palestine, Biased against Israel, Biased against Others, Biased against both Palestine and Israel, Unclear, and Not Applicable. Our team reviewed the literature to develop a codebook of terminologies and definitions. By coding each example, we sought to detect language tendencies used by media outlets when reporting on the same event. The primary finding was that most examples were classified as {``}Biased against Palestine,{''} as all examined language data used one-sided terms to describe the October 7 event. The least used category was {``}Not Applicable,{''} reserved for irrelevant examples or those lacking context. It is recommended to use neutral and balanced language when reporting volatile political news.
|
[
"Blqees, Blqees",
"Wardi, Al",
"Al-Sibani, Malath",
"Al-Siyabi, Hiba",
"Zidjaly, Najma"
] |
{B}ias{G}anda at {FIGNEWS} 2024 Shared Task: A Quest to Uncover Biased Views in News Coverage
|
arabicnlp-1.65
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.66.bib
|
@inproceedings{helal-etal-2024-cyberequity,
title = "The {C}yber{E}quity Lab at {FIGNEWS} 2024 Shared Task: Annotating a Corpus of {F}acebook Posts to Label Bias and Propaganda in {G}aza-{I}srael War Coverage in Five Languages",
author = "Helal, Mohammed and
Jarrar, Radi and
Alkhanafseh, Mohammed and
Karakra, Abdallah and
Awadallah, Ruba",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.66",
pages = "614--619",
abstract = "This paper presents The{\_}CyberEquity{\_}Lab team{'}s participation in the FIGNEWS 2024 Shared Task (Zaghouani, et al., 2024). The task is to annotate a corpus of Facebook posts into bias and propaganda in covering the Gaza-Israel war. The posts represent news articles written in five different languages. The paper presents the guidelines of annotation that the team has adhered in identifying both bias and propaganda in coverage of this continuous conflict.",
}
|
This paper presents The{\_}CyberEquity{\_}Lab team{'}s participation in the FIGNEWS 2024 Shared Task (Zaghouani, et al., 2024). The task is to annotate a corpus of Facebook posts for bias and propaganda in coverage of the Gaza-Israel war. The posts represent news articles written in five different languages. The paper presents the annotation guidelines that the team adhered to in identifying both bias and propaganda in coverage of this ongoing conflict.
|
[
"Helal, Mohammed",
"Jarrar, Radi",
"Alkhanafseh, Mohammed",
"Karakra, Abdallah",
"Awadallah, Ruba"
] |
The {C}yber{E}quity Lab at {FIGNEWS} 2024 Shared Task: Annotating a Corpus of {F}acebook Posts to Label Bias and Propaganda in {G}aza-{I}srael War Coverage in Five Languages
|
arabicnlp-1.66
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.67.bib
|
@inproceedings{ruiz-fernandez-etal-2024-bsc,
title = "{BSC}-{LANGTECH} at {FIGNEWS} 2024 Shared Task: Exploring Semi-Automatic Bias Annotation using Frame Analysis",
author = "Ruiz-Fern{\'a}ndez, Valle and
Saiz, Jos{\'e} and
Gonzalez-Agirre, Aitor",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.67",
pages = "620--629",
abstract = "This paper introduces the methodology of BSC-LANGTECH team for the FIGNEWS 2024 Shared Task on News Media Narratives. Following the bias annotation subtask, we apply the theory and methods of framing analysis to develop guidelines to annotate bias in the corpus provided by the task organizators. The manual annotation of a subset, with which a moderate IAA agreement has been achieved, is further used in Deep Learning techniques to explore automatic annotation and test the reliability of our framework.",
}
|
This paper introduces the methodology of the BSC-LANGTECH team for the FIGNEWS 2024 Shared Task on News Media Narratives. Following the bias annotation subtask, we apply the theory and methods of framing analysis to develop guidelines for annotating bias in the corpus provided by the task organizers. The manual annotation of a subset, for which moderate inter-annotator agreement (IAA) was achieved, is then used with deep learning techniques to explore automatic annotation and test the reliability of our framework.
|
[
"Ruiz-Fern{\\'a}ndez, Valle",
"Saiz, Jos{\\'e}",
"Gonzalez-Agirre, Aitor"
] |
{BSC}-{LANGTECH} at {FIGNEWS} 2024 Shared Task: Exploring Semi-Automatic Bias Annotation using Frame Analysis
|
arabicnlp-1.67
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.68.bib
|
@inproceedings{khatib-etal-2024-groningenannotatesgaza,
title = "{G}roningen{A}nnotates{G}aza at the {FIGNEWS} 2024 Shared Task: Analyzing Bias in Conflict Narratives",
author = "Khatib, Khalid and
Gemelli, Sara and
Heisterborg, Saskia and
Majumdar, Pritha and
Minnema, Gosse and
Muti, Arianna and
Solissa, Noa",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.68",
pages = "630--639",
abstract = "In this paper we report the development of our annotation methodology for the shared task FIGNEWS 2024. The objective of the shared task is to look into the layers of bias in how the war on Gaza is represented in media narrative. Our methodology follows the prescriptive paradigm, in which guidelines are detailed and refined through an iterative process in which edge cases are discussed and converged. Our IAA score (Krippendorff{'}s $\alpha$) is 0.420, highlighting the challenging and subjective nature of the task. Our results show that 52{\%} of posts were unbiased, 42{\%} biased against Palestine, 5{\%} biased against Israel, and 3{\%} biased against both. 16{\%} were unclear or not applicable.",
}
|
In this paper we report the development of our annotation methodology for the FIGNEWS 2024 shared task. The objective of the shared task is to look into the layers of bias in how the war on Gaza is represented in media narratives. Our methodology follows the prescriptive paradigm, in which guidelines are detailed and refined through an iterative process in which edge cases are discussed and converged upon. Our IAA score (Krippendorff{'}s $\alpha$) is 0.420, highlighting the challenging and subjective nature of the task. Our results show that 52{\%} of posts were unbiased, 42{\%} biased against Palestine, 5{\%} biased against Israel, and 3{\%} biased against both. 16{\%} were unclear or not applicable.
|
[
"Khatib, Khalid",
"Gemelli, Sara",
"Heisterborg, Saskia",
"Majumdar, Pritha",
"Minnema, Gosse",
"Muti, Arianna",
"Solissa, Noa"
] |
{G}roningen{A}nnotates{G}aza at the {FIGNEWS} 2024 Shared Task: Analyzing Bias in Conflict Narratives
|
arabicnlp-1.68
|
Poster
|
2407.09327v1
|
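The Krippendorff's $\alpha$ of 0.420 reported in the GroningenAnnotatesGaza abstract above is the standard chance-corrected agreement coefficient for annotation studies of this kind. As a reference, here is a minimal Python sketch of $\alpha$ for nominal labels, the level of measurement that applies to categorical bias labels; the function name and the toy data are illustrative, not taken from the paper.

from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.

    units: list of lists; each inner list holds the labels assigned to one
    post by the annotators who rated it (posts with fewer than two labels
    carry no pairable information and are skipped).
    """
    n_c = Counter()   # per-category counts over pairable units
    d_o_sum = 0.0     # numerator of the observed disagreement
    n = 0             # total number of pairable values
    for labels in units:
        m = len(labels)
        if m < 2:
            continue
        n += m
        n_c.update(labels)
        # ordered pairs of annotations within the unit that disagree
        disagreements = sum(1 for a, b in permutations(labels, 2) if a != b)
        d_o_sum += disagreements / (m - 1)
    if n < 2:
        return float("nan")
    d_o = d_o_sum / n
    # expected disagreement from the marginal label distribution
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

# toy example: three posts, each labeled by two annotators
print(krippendorff_alpha_nominal([
    ["Unbiased", "Unbiased"],
    ["Biased against Palestine", "Biased against Palestine"],
    ["Unbiased", "Unclear"],
]))  # ~0.545

For studies with missing annotations or other levels of measurement, the krippendorff package on PyPI offers a tested implementation of the same coefficient.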
https://aclanthology.org/2024.arabicnlp-1.69.bib
|
@inproceedings{duaibes-etal-2024-sina,
title = "Sina at {F}ig{N}ews 2024: Multilingual Datasets Annotated with Bias and Propaganda.",
author = "Duaibes, Lina and
Jaber, Areej and
Jarrar, Mustafa and
Qadi, Ahmad and
Qandeel, Mais",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.69",
pages = "640--645",
abstract = "The proliferation of bias and propaganda onsocial media is an increasingly significant concern,leading to the development of techniquesfor automatic detection. This article presents amultilingual corpus of 12, 000 Facebook postsfully annotated for bias and propaganda. Thecorpus was created as part of the FigNews2024 Shared Task on News Media Narrativesfor framing the Israeli War on Gaza. It coversvarious events during the War from October7, 2023 to January 31, 2024. The corpuscomprises 12, 000 posts in five languages (Arabic,Hebrew, English, French, and Hindi), with2, 400 posts for each language. The annotationprocess involved 10 graduate students specializingin Law. The Inter-Annotator Agreement(IAA) was used to evaluate the annotationsof the corpus, with an average IAA of 80.8{\%}for bias and 70.15{\%} for propaganda annotations.Our team was ranked among the bestperformingteams in both Bias and Propagandasubtasks. The corpus is open-source and availableat https://sina.birzeit.edu/fada",
}
|
The proliferation of bias and propaganda on social media is an increasingly significant concern, leading to the development of techniques for automatic detection. This article presents a multilingual corpus of 12,000 Facebook posts fully annotated for bias and propaganda. The corpus was created as part of the FigNews 2024 Shared Task on News Media Narratives for framing the Israeli War on Gaza. It covers various events during the War from October 7, 2023 to January 31, 2024. The corpus comprises 12,000 posts in five languages (Arabic, Hebrew, English, French, and Hindi), with 2,400 posts for each language. The annotation process involved 10 graduate students specializing in Law. The Inter-Annotator Agreement (IAA) was used to evaluate the annotations of the corpus, with an average IAA of 80.8{\%} for bias and 70.15{\%} for propaganda annotations. Our team was ranked among the best-performing teams in both Bias and Propaganda subtasks. The corpus is open-source and available at https://sina.birzeit.edu/fada
|
[
"Duaibes, Lina",
"Jaber, Areej",
"Jarrar, Mustafa",
"Qadi, Ahmad",
"Q",
"eel, Mais"
] |
Sina at {F}ig{N}ews 2024: Multilingual Datasets Annotated with Bias and Propaganda.
|
arabicnlp-1.69
|
Poster
|
2407.09327v1
|
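The Sina abstract above reports an average IAA of 80.8% for bias and 70.15% for propaganda. The excerpt does not specify the coefficient behind these figures; plain pairwise observed agreement is one common metric that naturally yields percentage values, and a minimal sketch of it follows. The annotator names and labels are made up for illustration, not drawn from the paper's data.

from itertools import combinations

def percent_agreement(a, b):
    """Observed agreement between two annotators over the same items."""
    assert len(a) == len(b)
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def average_pairwise_agreement(annotations):
    """annotations: dict mapping annotator name -> list of labels, aligned by item."""
    pairs = list(combinations(annotations.values(), 2))
    return sum(percent_agreement(a, b) for a, b in pairs) / len(pairs)

# toy example with three annotators and four posts
labels = {
    "ann1": ["Unbiased", "Biased against Palestine", "Unclear", "Unbiased"],
    "ann2": ["Unbiased", "Biased against Palestine", "Unbiased", "Unbiased"],
    "ann3": ["Unbiased", "Biased against Israel", "Unclear", "Unbiased"],
}
print(round(average_pairwise_agreement(labels), 1))  # 66.7

Unlike Krippendorff's alpha, observed agreement does not correct for chance, which is why the two kinds of scores reported across these papers are not directly comparable.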
https://aclanthology.org/2024.arabicnlp-1.70.bib
|
@inproceedings{al-mamari-etal-2024-squad,
title = "{SQU}ad at {FIGNEWS} 2024 Shared Task: Unmasking Bias in Social Media Through Data Analysis and Annotation",
author = "Al-Mamari, Asmahan and
Al-Farsi, Fatma and
Zidjaly, Najma",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.70",
pages = "646--650",
abstract = "This paper is a part of the FIGNEWS 2024 Datathon Shared Task and it aims to investigate bias and double standards in media coverage of the Gaza-Israel 2023-2024 conflict through a comprehensive analysis of news articles. The methodology integrated both manual labeling as well as the application of a natural language processing (NLP) tool, which is the Facebook/BART-large-MNLI model. The annotation process involved categorizing the dataset based on identified biases, following a set of guidelines in which categories of bias were defined by the team. The findings revealed that most of the media texts provided for analysis included bias against Palestine, whether it was through the use of biased vocabulary or even tone. It was also found that texts written in Hebrew contained the most bias against Palestine. In addition, when comparing annotations done by AAI-1 and AAI-2, the results turned out to be very similar, which might be mainly due to the clear annotation guidelines set by the annotators themselves. Thus, we recommend the use of clear guidelines to facilitate the process of annotation by future researchers.",
}
|
This paper is part of the FIGNEWS 2024 Datathon Shared Task; it aims to investigate bias and double standards in media coverage of the Gaza-Israel 2023-2024 conflict through a comprehensive analysis of news articles. The methodology integrated manual labeling with the application of a natural language processing (NLP) tool, the facebook/bart-large-mnli model. The annotation process involved categorizing the dataset based on identified biases, following a set of guidelines in which categories of bias were defined by the team. The findings revealed that most of the media texts provided for analysis included bias against Palestine, whether through biased vocabulary or tone. It was also found that texts written in Hebrew contained the most bias against Palestine. In addition, when comparing annotations done by AAI-1 and AAI-2, the results turned out to be very similar, which might be mainly due to the clear annotation guidelines set by the annotators themselves. We therefore recommend the use of clear guidelines to facilitate the annotation process for future researchers.
|
[
"Al-Mamari, Asmahan",
"Al-Farsi, Fatma",
"Zidjaly, Najma"
] |
{SQU}ad at {FIGNEWS} 2024 Shared Task: Unmasking Bias in Social Media Through Data Analysis and Annotation
|
arabicnlp-1.70
|
Poster
|
2407.09327v1
|
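The SQUad abstract above names facebook/bart-large-mnli, an NLI model commonly used for zero-shot classification via the Hugging Face pipeline API. The sketch below shows how such a zero-shot setup could assign the shared task's seven bias categories to a post; it is a plausible reconstruction under that assumption, not the team's actual code, and the post text is a placeholder.

from transformers import pipeline

# NLI-based zero-shot classifier; requires the transformers package.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# the seven categories used across the FIGNEWS bias subtask papers
labels = [
    "Unbiased",
    "Biased against Palestine",
    "Biased against Israel",
    "Biased against both Palestine and Israel",
    "Biased against others",
    "Unclear",
    "Not Applicable",
]

post = "Example news post to be labeled."  # placeholder text
result = classifier(post, candidate_labels=labels)
print(result["labels"][0], result["scores"][0])  # top-ranked label and its score

In a manual-plus-model workflow like the one described, such model scores would typically be reviewed against the human annotations rather than taken as final labels.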
https://aclanthology.org/2024.arabicnlp-1.71.bib
|
@inproceedings{saleh-etal-2024-justiceleague,
title = "{J}ustice{L}eague at {FIGNEWS} 2024 Shared Task: Innovations in Bias Annotation",
author = "Saleh, Amr and
Mohamed, Huda and
Sayed, Hager",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.71",
pages = "651--655",
abstract = "In response to the evolving media representation of the Gaza-Israel conflict, this study aims to categorize news articles based on their bias towards specific entities. Our primary objective is to annotate news articles with labels that indicate their bias: {``}Unbiased{''}, {``}Biased against Palestine{''}, {``}Biased against Israel{''}, {``}Biased against both Palestine and Israel{''}, {``}Biased against others{''}, {``}Unclear{''}, or {``}Not Applicable{''}.The methodology involves a detailed annotation process where each article is carefully reviewed and labeled according to predefined guidelines. For instance, an article reporting factual events without derogatory language is labeled as {``}Unbiased{''}, while one using inflammatory language against Palestinians is marked as {``}Biased against Palestine{''}.Key findings include the identification of various degrees of bias in news articles, highlighting the importance of critical analysis in media consumption. This research contributes to the broader effort of understanding media bias and promoting unbiased journalism. Tools such as Google Drive and Google Sheets facilitated the annotation process, enabling efficient collaboration and data management among the annotators.Our work also includes comprehensive guidelines and examples to ensure consistent annotation, enhancing the reliability of the data.",
}
|
In response to the evolving media representation of the Gaza-Israel conflict, this study aims to categorize news articles based on their bias towards specific entities. Our primary objective is to annotate news articles with labels that indicate their bias: {``}Unbiased{''}, {``}Biased against Palestine{''}, {``}Biased against Israel{''}, {``}Biased against both Palestine and Israel{''}, {``}Biased against others{''}, {``}Unclear{''}, or {``}Not Applicable{''}. The methodology involves a detailed annotation process where each article is carefully reviewed and labeled according to predefined guidelines. For instance, an article reporting factual events without derogatory language is labeled as {``}Unbiased{''}, while one using inflammatory language against Palestinians is marked as {``}Biased against Palestine{''}. Key findings include the identification of various degrees of bias in news articles, highlighting the importance of critical analysis in media consumption. This research contributes to the broader effort of understanding media bias and promoting unbiased journalism. Tools such as Google Drive and Google Sheets facilitated the annotation process, enabling efficient collaboration and data management among the annotators. Our work also includes comprehensive guidelines and examples to ensure consistent annotation, enhancing the reliability of the data.
|
[
"Saleh, Amr",
"Mohamed, Huda",
"Sayed, Hager"
] |
{J}ustice{L}eague at {FIGNEWS} 2024 Shared Task: Innovations in Bias Annotation
|
arabicnlp-1.71
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.72.bib
|
@inproceedings{chan-etal-2024-eagles,
title = "Eagles at {FIGNEWS} 2024 Shared Task: A Context-informed Prescriptive Approach to Bias Detection in Contentious News Narratives",
author = "Chan, Amanda and
A.Baddar, Mai and
Baazaoui, Sofien",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.72",
pages = "656--671",
abstract = "This research paper presents an in-depth examination of bias identification in media content related to the Israel-Palestine war. Focusing on the annotation guidelines and process developed by our team of researchers, the document outlines a systematic approach to discerning bias in articles. Through meticulous analysis, key indicators of bias such as emotive language, weasel words, and loaded comparisons are identified and discussed. The paper also explores the delineation between facts and opinions, emphasizing the importance of maintaining objectivity in annotation. Ethical considerations, including the handling of sensitive data and the promotion of multipartiality among annotators, are carefully addressed. The annotation guidelines also include other ethical considerations such as identifying rumors, false information, exercising prudence and selective quotations. The research paper offers insights into the annotation experience, highlighting common mistakes and providing valuable guidelines for future research in bias identification. By providing a comprehensive framework for evaluating bias in media coverage of the Israel-Palestine war, this study contributes to a deeper understanding of the complexities inherent in media discourse surrounding contentious geopolitical issues.",
}
|
This research paper presents an in-depth examination of bias identification in media content related to the Israel-Palestine war. Focusing on the annotation guidelines and process developed by our team of researchers, the document outlines a systematic approach to discerning bias in articles. Through meticulous analysis, key indicators of bias such as emotive language, weasel words, and loaded comparisons are identified and discussed. The paper also explores the delineation between facts and opinions, emphasizing the importance of maintaining objectivity in annotation. Ethical considerations, including the handling of sensitive data and the promotion of multipartiality among annotators, are carefully addressed. The annotation guidelines also include other ethical considerations such as identifying rumors, false information, exercising prudence and selective quotations. The research paper offers insights into the annotation experience, highlighting common mistakes and providing valuable guidelines for future research in bias identification. By providing a comprehensive framework for evaluating bias in media coverage of the Israel-Palestine war, this study contributes to a deeper understanding of the complexities inherent in media discourse surrounding contentious geopolitical issues.
|
[
"Chan, Am",
"a",
"A.Baddar, Mai",
"Baazaoui, Sofien"
] |
Eagles at {FIGNEWS} 2024 Shared Task: A Context-informed Prescriptive Approach to Bias Detection in Contentious News Narratives
|
arabicnlp-1.72
|
Poster
|
2407.09327v1
|
https://aclanthology.org/2024.arabicnlp-1.73.bib
|
@inproceedings{bourahouat-amer-2024-guidelines,
title = "The Guidelines Specialists at {FIGNEWS} 2024 Shared Task: An annotation guideline to Unravel Bias in News Media Narratives Using a Linguistic Approach",
author = "Bourahouat, Ghizlane and
Amer, Samar",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.73",
pages = "672--676",
abstract = "This article presents the participation of {``}The Guideline Specialists{''} in the FIGNEWS 2024 Shared Task, which aims to unravel bias and propaganda in news media narratives surrounding the Gaza-Israel 2023-2024 war. Leveraging innovative annotation methodologies and drawing on a diverse team of annotators, our approach focuses on meticulously annotating news articles using a linguistic approach to uncover the intricate nuances of bias. By incorporating detailed examples and drawing on related work that show how language structure represented in the use of passive voice or the use of nominalization and the choice of vocabulary carry bias, our findings provide valuable insights into the representation of the Gaza-Israel conflict across various languages and cultures. The guideline we developed detected the bias against Gaza, against Israel and others by setting keywords that are based on linguistic background tested by the AntConc concordance tool. The result was an annotation guideline that have a solid base. Through this collaborative effort, we developed a guideline that contributes to fostering a deeper understanding of media narratives during one of the most critical moments in recent history.",
}
|
This article presents the participation of {``}The Guideline Specialists{''} in the FIGNEWS 2024 Shared Task, which aims to unravel bias and propaganda in news media narratives surrounding the Gaza-Israel 2023-2024 war. Leveraging innovative annotation methodologies and drawing on a diverse team of annotators, our approach focuses on meticulously annotating news articles using a linguistic approach to uncover the intricate nuances of bias. By incorporating detailed examples and drawing on related work showing how linguistic structure, as represented in the use of passive voice, nominalization, and the choice of vocabulary, carries bias, our findings provide valuable insights into the representation of the Gaza-Israel conflict across various languages and cultures. The guideline we developed detected bias against Gaza, against Israel, and against others by defining keywords grounded in linguistic analysis and tested with the AntConc concordance tool. The result is an annotation guideline with a solid foundation. Through this collaborative effort, we developed a guideline that contributes to fostering a deeper understanding of media narratives during one of the most critical moments in recent history.
|
[
"Bourahouat, Ghizlane",
"Amer, Samar"
] |
The Guidelines Specialists at {FIGNEWS} 2024 Shared Task: An annotation guideline to Unravel Bias in News Media Narratives Using a Linguistic Approach
|
arabicnlp-1.73
|
Poster
|
2407.18147v1
|
https://aclanthology.org/2024.arabicnlp-1.74.bib
|
@inproceedings{alshammari-etal-2024-ksaa,
title = "{KSAA}-{CAD} Shared Task: Contemporary {A}rabic Dictionary for Reverse Dictionary and Word Sense Disambiguation",
author = "Alshammari, Waad and
Almazrua, Amal and
Al Wazrah, Asma and
Almatham, Rawan and
Alhoshan, Muneera and
Alosaimy, Abdulrahman",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.74",
pages = "677--685",
abstract = "This paper outlines the KSAA-CAD shared task, highlighting the Contemporary Arabic Language Dictionary within the scenario of developing a Reverse Dictionary (RD) system and enhancing Word Sense Disambiguation (WSD) capabilities. The first KSAA-RD (Al-Matham et al., 2023) highlighted significant gaps in the domain of RDs, which are designed to retrieve words by their meanings or definitions. This shared task comprises two tasks: RD and WSD. The RD task focuses on identifying word embeddings that most accurately match a given definition, termed a {``}gloss,{''} in Arabic. Conversely, the WSD task involves determining the specific meaning of a word in context, particularly when the word has multiple meanings. The winning team achieved the highest-ranking score of 0.0644 in RD using Electra embeddings. In this paper, we describe the methods employed by the participating teams and provide insights into the future direction of KSAA-CAD.",
}
|
This paper outlines the KSAA-CAD shared task, highlighting the Contemporary Arabic Language Dictionary within the scenario of developing a Reverse Dictionary (RD) system and enhancing Word Sense Disambiguation (WSD) capabilities. The first KSAA-RD (Al-Matham et al., 2023) highlighted significant gaps in the domain of RDs, which are designed to retrieve words by their meanings or definitions. This shared task comprises two tasks: RD and WSD. The RD task focuses on identifying word embeddings that most accurately match a given definition, termed a {``}gloss,{''} in Arabic. Conversely, the WSD task involves determining the specific meaning of a word in context, particularly when the word has multiple meanings. The winning team achieved the highest-ranking score of 0.0644 in RD using Electra embeddings. In this paper, we describe the methods employed by the participating teams and provide insights into the future direction of KSAA-CAD.
|
[
"Alshammari, Waad",
"Almazrua, Amal",
"Al Wazrah, Asma",
"Almatham, Rawan",
"Alhoshan, Muneera",
"Alosaimy, Abdulrahman"
] |
{KSAA}-{CAD} Shared Task: Contemporary {A}rabic Dictionary for Reverse Dictionary and Word Sense Disambiguation
|
arabicnlp-1.74
|
Poster
|
0806.2581v1
|
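The KSAA-CAD RD task described above asks systems to map a gloss to the embedding of the word it defines, which at inference time amounts to ranking candidate word embeddings by similarity to an embedding of the definition. Below is a minimal cosine-ranking sketch assuming precomputed vectors; the lemmas and numbers are invented for illustration, and the winning system's Electra embeddings are not reproduced here.

import numpy as np

def rank_candidates(gloss_vec: np.ndarray, cand_vecs: np.ndarray) -> np.ndarray:
    """Return candidate indices sorted by cosine similarity to the gloss vector."""
    g = gloss_vec / np.linalg.norm(gloss_vec)
    c = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    return np.argsort(-(c @ g))  # descending similarity

# toy example: three candidate word embeddings and one 4-d gloss embedding
vocab = ["qamar", "shams", "kitab"]          # hypothetical Arabic lemmas
cand = np.array([[0.9, 0.1, 0.0, 0.2],
                 [0.1, 0.8, 0.3, 0.0],
                 [0.0, 0.2, 0.9, 0.4]])
gloss = np.array([0.85, 0.15, 0.05, 0.1])    # stand-in for a definition encoding
for i in rank_candidates(gloss, cand):
    print(vocab[i])

In a real RD system the gloss vector would come from a trained definition encoder rather than a hand-written toy vector, and evaluation would compare the predicted embedding against the target word's embedding.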