bibtex_url: stringlengths 41-50
bibtext: stringlengths 693-2.88k
abstract: stringlengths 0-2k
authors: listlengths 1-45
title: stringlengths 21-206
id: stringlengths 7-16
type: stringclasses (2 values)
arxiv_id: stringlengths 9-12
https://aclanthology.org/2024.arabicnlp-1.75.bib
@inproceedings{chen-etal-2024-cher, title = "{C}her at {KSAA}-{CAD} 2024: Compressing Words and Definitions into the Same Space for {A}rabic Reverse Dictionary", author = "Chen, Pinzhen and Zhao, Zheng and Shao, Shun", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.75", pages = "686--691", abstract = "We present Team Cher{'}s submission to the ArabicNLP 2024 KSAA-CAD shared task on the reverse dictionary for Arabic{---}the retrieval of words using definitions as a query. Our approach is based on a multi-task learning framework that jointly learns reverse dictionary, definition generation, and reconstruction tasks. This work explores different tokenization strategies and compares retrieval performance for each embedding architecture. Evaluation using the KSAA-CAD benchmark demonstrates the effectiveness of our multi-task approach and provides insights into the reverse dictionary task for Arabic. It is worth highlighting that we achieve strong performance without using any external resources in addition to the provided training data.", }
We present Team Cher{'}s submission to the ArabicNLP 2024 KSAA-CAD shared task on the reverse dictionary for Arabic{---}the retrieval of words using definitions as a query. Our approach is based on a multi-task learning framework that jointly learns reverse dictionary, definition generation, and reconstruction tasks. This work explores different tokenization strategies and compares retrieval performance for each embedding architecture. Evaluation using the KSAA-CAD benchmark demonstrates the effectiveness of our multi-task approach and provides insights into the reverse dictionary task for Arabic. It is worth highlighting that we achieve strong performance without using any external resources in addition to the provided training data.
[ "Chen, Pinzhen", "Zhao, Zheng", "Shao, Shun" ]
{C}her at {KSAA}-{CAD} 2024: Compressing Words and Definitions into the Same Space for {A}rabic Reverse Dictionary
arabicnlp-1.75
Poster
2310.15823v3
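The Cher abstract above frames the reverse dictionary as retrieval of words from a definition query, with words and definitions compressed into the same vector space. As an illustrative sketch only (not the authors' code; the vocabulary and vectors below are invented), the core retrieval step reduces to nearest-neighbour search by cosine similarity:

```python
import numpy as np

def retrieve(definition_vec, word_matrix, word_list, k=3):
    """Return the k words whose embeddings are most similar to the query."""
    # Cosine similarity = dot product of L2-normalised vectors.
    d = definition_vec / np.linalg.norm(definition_vec)
    W = word_matrix / np.linalg.norm(word_matrix, axis=1, keepdims=True)
    sims = W @ d
    order = np.argsort(-sims)[:k]
    return [(word_list[i], float(sims[i])) for i in order]

# Hypothetical 2-d embeddings for three words (toy data, for illustration).
words = ["kitab", "qalam", "bayt"]
W = np.array([[1.0, 0.1], [0.2, 1.0], [0.9, 0.9]])

# A definition embedding close to the first word's vector.
top2 = retrieve(np.array([0.95, 0.15]), W, words, k=2)
print(top2)
```

In the shared task itself the query vector would come from an encoder over the Arabic gloss and the word matrix from the provided target embeddings; this sketch only shows the ranking step those metrics are computed over.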
https://aclanthology.org/2024.arabicnlp-1.76.bib
@inproceedings{alharbi-2024-mission, title = "{MISSION} at {KSAA}-{CAD} 2024: {A}ra{T}5 with {A}rabic Reverse Dictionary", author = "Alharbi, Thamer", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.76", pages = "692--696", abstract = "This research paper presents our approach for the KSAA-CAD 2024 competition, focusing on Arabic Reverse Dictionary (RD) task (Alshammari et al., 2024). Leveraging the functionalities of the Arabic Reverse Dictionary, our system allows users to input glosses and retrieve corresponding words. We provide all associated notebooks and developed models on GitHub and Hugging face, respectively. Our task entails working with a dataset comprising dictionary data and word embedding vectors, utilizing three different architectures of contextualized word embeddings: AraELECTRA, AraBERTv2, and camelBERT-MSA. We fine-tune the AraT5v2-base-1024 model for predicting each embedding, considering various hyperparameters for training and validation. Evaluation metrics include ranking accuracy, mean squared error (MSE), and cosine similarity. The results demonstrate the effectiveness of our approach on both development and test datasets, showcasing promising performance across different embedding types.", }
This paper presents our approach to the KSAA-CAD 2024 competition, focusing on the Arabic Reverse Dictionary (RD) task (Alshammari et al., 2024). Leveraging the functionalities of the Arabic Reverse Dictionary, our system allows users to input glosses and retrieve the corresponding words. We provide all associated notebooks and developed models on GitHub and Hugging Face, respectively. Our task entails working with a dataset comprising dictionary data and word embedding vectors, utilizing three different architectures of contextualized word embeddings: AraELECTRA, AraBERTv2, and camelBERT-MSA. We fine-tune the AraT5v2-base-1024 model to predict each embedding, considering various hyperparameters for training and validation. Evaluation metrics include ranking accuracy, mean squared error (MSE), and cosine similarity. The results demonstrate the effectiveness of our approach on both the development and test datasets, showing promising performance across different embedding types.
[ "Alharbi, Thamer" ]
{MISSION} at {KSAA}-{CAD} 2024: {A}ra{T}5 with {A}rabic Reverse Dictionary
arabicnlp-1.76
Poster
2310.15823v3
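The MISSION abstract names ranking accuracy, MSE, and cosine similarity as its evaluation metrics. A minimal sketch of how such metrics can be computed between a predicted and a gold word embedding (toy vectors, not the shared task's official evaluation code):

```python
import numpy as np

def mse(a, b):
    # Mean squared error between predicted and gold embeddings.
    return float(np.mean((a - b) ** 2))

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_of_gold(pred, candidates, gold_idx):
    # Rank every candidate word embedding by similarity to the prediction;
    # ranking-style metrics then check where the gold word lands (0 = top).
    sims = np.array([cosine(pred, c) for c in candidates])
    return int(np.argsort(-sims).tolist().index(gold_idx))

pred = np.array([1.0, 0.0])                       # hypothetical prediction
cands = [np.array([0.0, 1.0]), np.array([0.9, 0.1])]  # hypothetical vocab
print(mse(pred, cands[1]), rank_of_gold(pred, cands, 1))
```

A prediction is counted as correct by a rank-based accuracy when `rank_of_gold` is 0, i.e. the gold word is the nearest candidate.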
https://aclanthology.org/2024.arabicnlp-1.77.bib
@inproceedings{sibaee-etal-2024-asos-ksaa, title = "{ASOS} at {KSAA}-{CAD} 2024: One Embedding is All You Need for Your Dictionary", author = "Sibaee, Serry and Alharbi, Abdullah and Ahmad, Samar and Nacar, Omer and Koubaa, Anis and Ghouti, Lahouari", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.77", pages = "697--703", abstract = "Semantic search tasks have grown extremely fast following the advancements in large language models, including the Reverse Dictionary and Word Sense Disambiguation in Arabic. This paper describes our participation in the Contemporary Arabic Dictionary Shared Task. We propose two models that achieved first place in both tasks. We conducted comprehensive experiments on the latest five multilingual sentence transformers and the Arabic BERT model for semantic embedding extraction. We achieved a ranking score of 0.06 for the reverse dictionary task, which is double than last year{'}s winner. We had an accuracy score of 0.268 for the Word Sense Disambiguation task.", }
Semantic search tasks have advanced rapidly following recent progress in large language models, including the Reverse Dictionary and Word Sense Disambiguation tasks in Arabic. This paper describes our participation in the Contemporary Arabic Dictionary Shared Task. We propose two models that achieved first place in both tasks. We conducted comprehensive experiments on the five latest multilingual sentence transformers and the Arabic BERT model for semantic embedding extraction. We achieved a ranking score of 0.06 for the reverse dictionary task, double that of last year{'}s winner, and an accuracy score of 0.268 for the Word Sense Disambiguation task.
[ "Sibaee, Serry", "Alharbi, Abdullah", "Ahmad, Samar", "Nacar, Omer", "Koubaa, Anis", "Ghouti, Lahouari" ]
{ASOS} at {KSAA}-{CAD} 2024: One Embedding is All You Need for Your Dictionary
arabicnlp-1.77
Poster
1007.3561v1
https://aclanthology.org/2024.arabicnlp-1.78.bib
@inproceedings{alheraki-meshoul-2024-baleegh, title = "Baleegh at {KSAA}-{CAD} 2024: Towards Enhancing {A}rabic Reverse Dictionaries", author = "Alheraki, Mais and Meshoul, Souham", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.78", pages = "704--708", abstract = "The domain of reverse dictionaries (RDs), while advancing in languages like English and Chinese, remains underdeveloped for Arabic. This study attempts to explore a data-driven approach to enhance word retrieval processes in Arabic RDs. The research focuses on the ArabicNLP 2024 Shared Task, named KSAA-CAD, which provides a dictionary dataset of 39,214 word-gloss pairs, each with a corresponding target word embedding. The proposed solution aims to surpass the baseline performance by employing SOTA deep learning models and innovative data expansion techniques. The methodology involves enriching the dataset with contextually relevant examples, training a T5 model to align the words to their glosses in the space, and evaluating the results on the shared task metrics. We find that our model is closely aligned with the baseline performance on bertseg and bertmsa targets, however does not perform well on electra target, suggesting the need for further exploration.", }
The domain of reverse dictionaries (RDs), while advancing in languages like English and Chinese, remains underdeveloped for Arabic. This study explores a data-driven approach to enhance word retrieval in Arabic RDs. The research focuses on the ArabicNLP 2024 Shared Task, named KSAA-CAD, which provides a dictionary dataset of 39,214 word-gloss pairs, each with a corresponding target word embedding. The proposed solution aims to surpass the baseline performance by employing SOTA deep learning models and data expansion techniques. The methodology involves enriching the dataset with contextually relevant examples, training a T5 model to align words with their glosses in the embedding space, and evaluating the results on the shared task metrics. We find that our model closely matches the baseline performance on the bertseg and bertmsa targets but does not perform well on the electra target, suggesting the need for further exploration.
[ "Alheraki, Mais", "Meshoul, Souham" ]
Baleegh at {KSAA}-{CAD} 2024: Towards Enhancing {A}rabic Reverse Dictionaries
arabicnlp-1.78
Poster
2310.15823v3
https://aclanthology.org/2024.arabicnlp-1.79.bib
@inproceedings{abdul-mageed-etal-2024-nadi, title = "{NADI} 2024: The Fifth Nuanced {A}rabic Dialect Identification Shared Task", author = "Abdul-Mageed, Muhammad and Keleg, Amr and Elmadany, AbdelRahim and Zhang, Chiyu and Hamed, Injy and Magdy, Walid and Bouamor, Houda and Habash, Nizar", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.79", pages = "709--728", abstract = "We describe the findings of the fifth Nuanced Arabic Dialect Identification Shared Task (NADI 2024). NADI{'}s objective is to help advance SoTA Arabic NLP by providing guidance, datasets, modeling opportunities, and standardized evaluation conditions that allow researchers to collaboratively compete on prespecified tasks. NADI 2024 targeted both dialect identification cast as a multi-label task (Subtask 1), identification of the Arabic level of dialectness (Subtask 2), and dialect-to-MSA machine translation (Subtask 3). A total of 51 unique teams registered for the shared task, of whom 12 teams have participated (with 76 valid submissions during the test phase). Among these, three teams participated in Subtask 1, three in Subtask 2, and eight in Subtask 3. The winning teams achieved 50.57 F1 on Subtask 1, 0.1403 RMSE for Subtask 2, and 20.44 BLEU in Subtask 3, respectively. Results show that Arabic dialect processing tasks such as dialect identification and machine translation remain challenging. We describe the methods employed by the participating teams and briefly offer an outlook for NADI.", }
We describe the findings of the fifth Nuanced Arabic Dialect Identification Shared Task (NADI 2024). NADI{'}s objective is to help advance SoTA Arabic NLP by providing guidance, datasets, modeling opportunities, and standardized evaluation conditions that allow researchers to collaboratively compete on prespecified tasks. NADI 2024 targeted dialect identification cast as a multi-label task (Subtask 1), identification of the Arabic level of dialectness (Subtask 2), and dialect-to-MSA machine translation (Subtask 3). A total of 51 unique teams registered for the shared task, of whom 12 participated (with 76 valid submissions during the test phase). Among these, three teams participated in Subtask 1, three in Subtask 2, and eight in Subtask 3. The winning teams achieved 50.57 F1 on Subtask 1, 0.1403 RMSE on Subtask 2, and 20.44 BLEU on Subtask 3, respectively. The results show that Arabic dialect processing tasks such as dialect identification and machine translation remain challenging. We describe the methods employed by the participating teams and briefly offer an outlook for NADI.
[ "Abdul-Mageed, Muhammad", "Keleg, Amr", "Elmadany, AbdelRahim", "Zhang, Chiyu", "Hamed, Injy", "Magdy, Walid", "Bouamor, Houda", "Habash, Nizar" ]
{NADI} 2024: The Fifth Nuanced {A}rabic Dialect Identification Shared Task
arabicnlp-1.79
Poster
2407.04910v1
https://aclanthology.org/2024.arabicnlp-1.80.bib
@inproceedings{demidova-etal-2024-arabic, title = "{A}rabic Train at {NADI} 2024 shared task: {LLM}s{'} Ability to Translate {A}rabic Dialects into {M}odern {S}tandard {A}rabic", author = "Demidova, Anastasiia and Atwany, Hanin and Rabih, Nour and Sha{'}ban, Sanad", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.80", pages = "729--734", abstract = "Navigating the intricacies of machine translation (MT) involves tackling the nuanced disparities between Arabic dialects and Modern Standard Arabic (MSA), presenting a formidable obstacle. In this study, we delve into Subtask 3 of the NADI shared task (CITATION), focusing on the translation of sentences from four distinct Arabic dialects into MSA. Our investigation explores the efficacy of various models, including Jais, NLLB, GPT-3.5, and GPT-4, in this dialect-to-MSA translation endeavor. Our findings reveal that Jais surpasses all other models, boasting an average BLEU score of 19.48 in the combination of zero- and few-shot setting, whereas NLLB exhibits the least favorable performance, garnering a BLEU score of 8.77.", }
Navigating the intricacies of machine translation (MT) involves tackling the nuanced disparities between Arabic dialects and Modern Standard Arabic (MSA), a formidable obstacle. In this study, we address Subtask 3 of the NADI shared task (CITATION), focusing on the translation of sentences from four distinct Arabic dialects into MSA. We explore the efficacy of several models, including Jais, NLLB, GPT-3.5, and GPT-4, on this dialect-to-MSA translation task. Our findings reveal that Jais surpasses all other models, with an average BLEU score of 19.48 in the combined zero- and few-shot settings, whereas NLLB performs worst, with a BLEU score of 8.77.
[ "Demidova, Anastasiia", "Atwany, Hanin", "Rabih, Nour", "Sha{'}ban, Sanad" ]
{A}rabic Train at {NADI} 2024 shared task: {LLM}s{'} Ability to Translate {A}rabic Dialects into {M}odern {S}tandard {A}rabic
arabicnlp-1.80
Poster
2407.04910v1
https://aclanthology.org/2024.arabicnlp-1.81.bib
@inproceedings{sakr-etal-2024-alexunlp, title = "{A}lex{UNLP}-{STM} at {NADI} 2024 shared task: Quantifying the {A}rabic Dialect Spectrum with Contrastive Learning, Weighted Sampling, and {BERT}-based Regression Ensemble", author = "Sakr, Abdelrahman and Torki, Marwan and El-Makky, Nagwa", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.81", pages = "735--741", abstract = "Recognizing the nuanced spectrum of dialectness in Arabic text poses a significant challenge for natural language processing (NLP) tasks. Traditional dialect identification (DI) methods treat the task as binary, overlooking the continuum of dialect variation present in Arabic speech and text. In this paper, we describe our submission to the NADI shared Task of ArabicNLP 2024. We participated in Subtask 2 - ALDi Estimation, which focuses on estimating the Arabic Level of Dialectness (ALDi) for Arabic text, indicating how much it deviates from Modern Standard Arabic (MSA) on a scale from 0 to 1, where 0 means MSA and 1 means high divergence from MSA. We explore diverse training approaches, including contrastive learning, applying a random weighted sampler along with fine-tuning a regression task based on the AraBERT model, after adding a linear and non-linear layer on top of its pooled output. Finally, performing a brute force ensemble strategy increases the performance of our system. Our proposed solution achieved a Root Mean Squared Error (RMSE) of 0.1406, ranking second on the leaderboard.", }
Recognizing the nuanced spectrum of dialectness in Arabic text poses a significant challenge for natural language processing (NLP) tasks. Traditional dialect identification (DI) methods treat the task as binary, overlooking the continuum of dialect variation present in Arabic speech and text. In this paper, we describe our submission to the NADI shared task of ArabicNLP 2024. We participated in Subtask 2, ALDi Estimation, which focuses on estimating the Arabic Level of Dialectness (ALDi) of Arabic text: how much it deviates from Modern Standard Arabic (MSA) on a scale from 0 to 1, where 0 means MSA and 1 means high divergence from MSA. We explore diverse training approaches, including contrastive learning, applying a random weighted sampler, and fine-tuning a regression model based on AraBERT after adding a linear and a non-linear layer on top of its pooled output. Finally, a brute-force ensemble strategy further increases the performance of our system. Our proposed solution achieved a Root Mean Squared Error (RMSE) of 0.1406, ranking second on the leaderboard.
[ "Sakr, Abdelrahman", "Torki, Marwan", "El-Makky, Nagwa" ]
{A}lex{UNLP}-{STM} at {NADI} 2024 shared task: Quantifying the {A}rabic Dialect Spectrum with Contrastive Learning, Weighted Sampling, and {BERT}-based Regression Ensemble
arabicnlp-1.81
Poster
2407.04910v1
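The weighted-sampling idea in the AlexUNLP-STM abstract can be sketched as follows (a toy illustration, not the authors' implementation: the ALDi scores, bucket count, and weighting scheme below are invented). Bucketing the continuous ALDi target and drawing examples with inverse-frequency probabilities rebalances a training set dominated by MSA-like sentences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ALDi scores in [0, 1]; most examples are MSA-like (score 0).
scores = np.array([0.0, 0.0, 0.0, 0.0, 0.9, 1.0])

# Coarse buckets over the score range, then inverse-frequency weights so
# rare high-dialectness examples are drawn as often as abundant MSA ones.
buckets = np.minimum((scores * 5).astype(int), 4)
counts = np.bincount(buckets, minlength=5)
weights = 1.0 / counts[buckets]
weights /= weights.sum()

# Draw a rebalanced sample of indices for training.
sample = rng.choice(len(scores), size=1000, replace=True, p=weights)
```

With these weights, the two high-dialectness examples together receive the same total sampling mass as the four MSA-like ones, so roughly half the drawn batch is high-dialectness.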
https://aclanthology.org/2024.arabicnlp-1.82.bib
@inproceedings{kanjirangat-etal-2024-nlp, title = "{NLP}{\_}{DI} at {NADI} 2024 shared task: Multi-label {A}rabic Dialect Classifications with an Unsupervised Cross-Encoder", author = "Kanjirangat, Vani and Samardzic, Tanja and Dolamic, Ljiljana and Rinaldi, Fabio", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.82", pages = "742--747", abstract = "We report the approaches submitted to the NADI 2024 Subtask 1: Multi-label country-level Dialect Identification (MLDID). The core part was to adapt the information from multi-class data for a multi-label dialect classification task. We experimented with supervised and unsupervised strategies to tackle the task in this challenging setting. Under the supervised setup, we used the model trained using NADI 2023 data and devised approaches to convert the multi-class predictions to multi-label by using information from the confusion matrix or using calibrated probabilities. Under unsupervised settings, we used the Arabic-based sentence encoders and multilingual cross-encoders to retrieve similar samples from the training set, considering each test input as a query. The associated labels are then assigned to the input query. We also tried different variations, such as co-occurring dialects derived from the provided development set. We obtained the best validation performance of 48.5{\%} F-score using one of the variations with an unsupervised approach and the same approach yielded the best test result of 43.27{\%} (Ranked 2).", }
We report the approaches submitted to NADI 2024 Subtask 1: Multi-label country-level Dialect Identification (MLDID). The core challenge was adapting information from multi-class data to a multi-label dialect classification task. We experimented with supervised and unsupervised strategies in this challenging setting. Under the supervised setup, we used a model trained on NADI 2023 data and devised approaches to convert its multi-class predictions to multi-label ones, using information from the confusion matrix or calibrated probabilities. Under the unsupervised setup, we used Arabic-based sentence encoders and multilingual cross-encoders to retrieve similar samples from the training set, treating each test input as a query; the labels associated with the retrieved samples are then assigned to the query. We also tried different variations, such as using co-occurring dialects derived from the provided development set. We obtained the best validation performance of 48.5{\%} F-score with one of the unsupervised variations, and the same approach yielded the best test result of 43.27{\%} (ranked 2nd).
[ "Kanjirangat, Vani", "Samardzic, Tanja", "Dolamic, Ljiljana", "Rinaldi, Fabio" ]
{NLP}{\_}{DI} at {NADI} 2024 shared task: Multi-label {A}rabic Dialect Classifications with an Unsupervised Cross-Encoder
arabicnlp-1.82
Poster
2407.04910v1
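One of the conversion strategies the NLP_DI abstract describes, turning multi-class predictions into multi-label ones via calibrated probabilities, can be sketched as follows (an illustrative reading, not the authors' code: the label set, probabilities, and threshold are invented):

```python
import numpy as np

def to_multilabel(probs, labels, threshold=0.2):
    """Keep every dialect whose calibrated probability clears a threshold,
    always retaining the argmax so at least one label is predicted."""
    keep = {labels[int(np.argmax(probs))]}
    keep.update(l for l, p in zip(labels, probs) if p >= threshold)
    return sorted(keep)

labels = ["Egypt", "Jordan", "Palestine", "Morocco"]
print(to_multilabel(np.array([0.45, 0.30, 0.20, 0.05]), labels))
```

A confusion-matrix variant would instead add, for the predicted class, the classes it is most frequently confused with on held-out data.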
https://aclanthology.org/2024.arabicnlp-1.83.bib
@inproceedings{nacar-etal-2024-asos-nadi, title = "{ASOS} at {NADI} 2024 shared task: Bridging Dialectness Estimation and {MSA} Machine Translation for {A}rabic Language Enhancement", author = "Nacar, Omer and Sibaee, Serry and Alharbi, Abdullah and Ghouti, Lahouari and Koubaa, Anis", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.83", pages = "748--753", abstract = "This study undertakes a comprehensive investigation of transformer-based models to advance Arabic language processing, focusing on two pivotal aspects: the estimation of Arabic Level of Dialectness and dialectal sentence-level machine translation into Modern Standard Arabic. We conducted various evaluations of different sentence transformers across a proposed regression model, showing that the MARBERT transformer-based proposed regression model achieved the best root mean square error of 0.1403 for Arabic Level of Dialectness estimation. In parallel, we developed bi-directional translation models between Modern Standard Arabic and four specific Arabic dialects{---}Egyptian, Emirati, Jordanian, and Palestinian{---}by fine-tuning and evaluating different sequence-to-sequence transformers. This approach significantly improved translation quality, achieving a BLEU score of 0.1713. We also enhanced our evaluation capabilities by integrating MSA predictions from the machine translation model into our Arabic Level of Dialectness estimation framework, forming a comprehensive pipeline that not only demonstrates the effectiveness of our methodologies but also establishes a new benchmark in the deployment of advanced Arabic NLP technologies.", }
This study undertakes a comprehensive investigation of transformer-based models to advance Arabic language processing, focusing on two pivotal aspects: estimating the Arabic Level of Dialectness and dialectal sentence-level machine translation into Modern Standard Arabic. We evaluated various sentence transformers within a proposed regression model, showing that the MARBERT-based regression model achieved the best root mean square error of 0.1403 for Arabic Level of Dialectness estimation. In parallel, we developed bi-directional translation models between Modern Standard Arabic and four specific Arabic dialects{---}Egyptian, Emirati, Jordanian, and Palestinian{---}by fine-tuning and evaluating different sequence-to-sequence transformers. This approach significantly improved translation quality, achieving a BLEU score of 0.1713. We also enhanced our evaluation pipeline by feeding the MSA predictions of the machine translation model into our Arabic Level of Dialectness estimation framework, forming a comprehensive pipeline that demonstrates the effectiveness of our methodologies and establishes a new benchmark for the deployment of advanced Arabic NLP technologies.
[ "Nacar, Omer", "Sibaee, Serry", "Alharbi, Abdullah", "Ghouti, Lahouari", "Koubaa, Anis" ]
{ASOS} at {NADI} 2024 shared task: Bridging Dialectness Estimation and {MSA} Machine Translation for {A}rabic Language Enhancement
arabicnlp-1.83
Poster
2407.04910v1
https://aclanthology.org/2024.arabicnlp-1.84.bib
@inproceedings{lichouri-etal-2024-dznlp, title = "dz{NLP} at {NADI} 2024 Shared Task: Multi-Classifier Ensemble with Weighted Voting and {TF}-{IDF} Features", author = "Lichouri, Mohamed and Lounnas, Khaled and Nadjib, Zahaf and Ayoub, Rabiai", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.84", pages = "754--757", abstract = "This paper presents the contribution of our dzNLP team to the NADI 2024 shared task, specifically in Subtask 1 - Multi-label Country-level Dialect Identification (MLDID) (Closed Track). We explored various configurations to address the challenge: in Experiment 1, we utilized a union of n-gram analyzers (word, character, character with word boundaries) with different n-gram values; in Experiment 2, we combined a weighted union of Term Frequency-Inverse Document Frequency (TF-IDF) features with various weights; and in Experiment 3, we implemented a weighted major voting scheme using three classifiers: Linear Support Vector Classifier (LSVC), Random Forest (RF), and K-Nearest Neighbors (KNN). Our approach, despite its simplicity and reliance on traditional machine learning techniques, demonstrated competitive performance in terms of accuracy and precision. Notably, we achieved the highest precision score of 63.22{\%} among the participating teams. However, our overall F1 score was approximately 21{\%}, significantly impacted by a low recall rate of 12.87{\%}. This indicates that while our models were highly precise, they struggled to recall a broad range of dialect labels, highlighting a critical area for improvement in handling diverse dialectal variations.", }
This paper presents the contribution of our dzNLP team to the NADI 2024 shared task, specifically Subtask 1: Multi-label Country-level Dialect Identification (MLDID) (Closed Track). We explored various configurations to address the challenge: in Experiment 1, we utilized a union of n-gram analyzers (word, character, and character with word boundaries) with different n-gram values; in Experiment 2, we combined a weighted union of Term Frequency-Inverse Document Frequency (TF-IDF) features with various weights; and in Experiment 3, we implemented a weighted majority voting scheme over three classifiers: Linear Support Vector Classifier (LSVC), Random Forest (RF), and K-Nearest Neighbors (KNN). Our approach, despite its simplicity and reliance on traditional machine learning techniques, demonstrated competitive performance in terms of accuracy and precision. Notably, we achieved the highest precision score of 63.22{\%} among the participating teams. However, our overall F1 score was approximately 21{\%}, significantly impacted by a low recall rate of 12.87{\%}. This indicates that while our models were highly precise, they struggled to recall a broad range of dialect labels, highlighting a critical area for improvement in handling diverse dialectal variations.
[ "Lichouri, Mohamed", "Lounnas, Khaled", "Nadjib, Zahaf", "Ayoub, Rabiai" ]
dz{NLP} at {NADI} 2024 Shared Task: Multi-Classifier Ensemble with Weighted Voting and {TF}-{IDF} Features
arabicnlp-1.84
Poster
2407.13608v1
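The dzNLP setup, a union of word- and character-level TF-IDF features feeding a weighted-vote ensemble of LSVC, RF, and KNN, maps directly onto standard scikit-learn components. This is only a toy sketch under invented data, weights, and hyperparameters, not the team's actual configuration:

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.svm import LinearSVC

# Union of n-gram analyzers: word n-grams plus character n-grams with
# word boundaries, as in the abstract's Experiment 1.
features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
])

# Weighted hard-voting ensemble of the three named classifiers
# (the [2, 1, 1] weights are illustrative, not the paper's values).
ensemble = VotingClassifier(
    estimators=[
        ("lsvc", LinearSVC()),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=1)),
    ],
    voting="hard",
    weights=[2, 1, 1],
)

clf = make_pipeline(features, ensemble)

# Tiny invented dialect dataset, purely to exercise the pipeline.
texts = ["wesh rak", "wesh rak khoya", "izzayak", "izzayak ya basha"]
dialects = ["DZ", "DZ", "EG", "EG"]
clf.fit(texts, dialects)
print(clf.predict(["izzayak"]))
```

Hard voting is used here because `LinearSVC` exposes no `predict_proba`, which soft voting would require.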
https://aclanthology.org/2024.arabicnlp-1.85.bib
@inproceedings{karoui-etal-2024-elyadata, title = "{ELYADATA} at {NADI} 2024 shared task: {A}rabic Dialect Identification with Similarity-Induced Mono-to-Multi Label Transformation.", author = "Karoui, Amira and Gharbi, Farah and Kammoun, Rami and Laouirine, Imen and Bougares, Fethi", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.85", pages = "758--763", abstract = "This paper describes our submissions to the Multi-label Country-level Dialect Identification subtask of the NADI2024 shared task, organized during the second edition of the ArabicNLP conference. Our submission is based on the ensemble of fine-tuned BERT-based models, after implementing the Similarity-Induced Mono-to-Multi Label Transformation (SIMMT) on the input data. Our submission ranked first with a Macro-Average (MA) F1 score of 50.57{\%}.", }
This paper describes our submissions to the Multi-label Country-level Dialect Identification subtask of the NADI2024 shared task, organized during the second edition of the ArabicNLP conference. Our submission is based on the ensemble of fine-tuned BERT-based models, after implementing the Similarity-Induced Mono-to-Multi Label Transformation (SIMMT) on the input data. Our submission ranked first with a Macro-Average (MA) F1 score of 50.57{\%}.
[ "Karoui, Amira", "Gharbi, Farah", "Kammoun, Rami", "Laouirine, Imen", "Bougares, Fethi" ]
{ELYADATA} at {NADI} 2024 shared task: {A}rabic Dialect Identification with Similarity-Induced Mono-to-Multi Label Transformation.
arabicnlp-1.85
Poster
2407.04910v1
https://aclanthology.org/2024.arabicnlp-1.86.bib
@inproceedings{almusallam-ahmad-2024-alson, title = "Alson at {NADI} 2024 shared task: Alson - A fine-tuned model for {A}rabic Dialect Translation", author = "AlMusallam, Manan and Ahmad, Samar", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.86", pages = "764--768", abstract = "DA-MSA Machine Translation is a recent challenge due to the multitude of Arabic dialects and their variations. In this paper, we present our results within the context of Subtask 3 of the NADI-2024 Shared Task (Abdul-Mageed et al., 2024), that is, DA-MSA Machine Translation. We utilized the DIALECTS-MSA MADAR corpus (Bouamor et al., 2018), the Emi-NADI corpus for the Emirati dialect (Khered et al., 2023), and we augmented the Palestinian and Jordanian datasets based on NADI 2021. Our approach involves developing sentence-level machine translations from Palestinian, Jordanian, Emirati, and Egyptian dialects to Modern Standard Arabic (MSA). To address this challenge, we fine-tuned models such as AraT5v2-msa-small, AraT5v2-msa-base (Nagoudi et al., 2022), and AraT5v2-base-1024 (Elmadany et al., 2023) to compare their performance. Among these, the AraT5v2-base-1024 model achieved the best accuracy, with a BLEU score of 0.1650 on the development set and 0.1746 on the test set.", }
DA-MSA Machine Translation is a recent challenge due to the multitude of Arabic dialects and their variations. In this paper, we present our results within the context of Subtask 3 of the NADI-2024 Shared Task (Abdul-Mageed et al., 2024), that is, DA-MSA Machine Translation. We utilized the DIALECTS-MSA MADAR corpus (Bouamor et al., 2018), the Emi-NADI corpus for the Emirati dialect (Khered et al., 2023), and we augmented the Palestinian and Jordanian datasets based on NADI 2021. Our approach involves developing sentence-level machine translations from Palestinian, Jordanian, Emirati, and Egyptian dialects to Modern Standard Arabic (MSA). To address this challenge, we fine-tuned models such as AraT5v2-msa-small, AraT5v2-msa-base (Nagoudi et al., 2022), and AraT5v2-base-1024 (Elmadany et al., 2023) to compare their performance. Among these, the AraT5v2-base-1024 model achieved the best accuracy, with a BLEU score of 0.1650 on the development set and 0.1746 on the test set.
[ "AlMusallam, Manan", "Ahmad, Samar" ]
Alson at {NADI} 2024 shared task: Alson - A fine-tuned model for {A}rabic Dialect Translation
arabicnlp-1.86
Poster
2311.18739v1
https://aclanthology.org/2024.arabicnlp-1.87.bib
@inproceedings{ibrahim-2024-cufe, title = "{CUFE} at {NADI} 2024 shared task: Fine-Tuning Llama-3 To Translate From {A}rabic Dialects To {M}odern {S}tandard {A}rabic", author = "Ibrahim, Michael", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.87", pages = "769--773", abstract = "LLMs such as GPT-4 and LLaMA excel in multiple natural language processing tasks; however, LLMs face challenges in delivering satisfactory performance on low-resource languages due to the limited availability of training data. In this paper, LLaMA-3 with 8 billion parameters is fine-tuned to translate among Egyptian, Emirati, Jordanian, and Palestinian Arabic dialects and Modern Standard Arabic (MSA). In the NADI 2024 Task on DA-MSA Machine Translation, the proposed method achieved a BLEU score of 21.44 when it was fine-tuned on the development dataset of the NADI 2024 Task on DA-MSA and a BLEU score of 16.09 when it was fine-tuned using the OSACT dataset.", }
LLMs such as GPT-4 and LLaMA excel in multiple natural language processing tasks; however, LLMs face challenges in delivering satisfactory performance on low-resource languages due to the limited availability of training data. In this paper, LLaMA-3 with 8 billion parameters is fine-tuned to translate among Egyptian, Emirati, Jordanian, and Palestinian Arabic dialects and Modern Standard Arabic (MSA). In the NADI 2024 Task on DA-MSA Machine Translation, the proposed method achieved a BLEU score of 21.44 when it was fine-tuned on the development dataset of the NADI 2024 Task on DA-MSA and a BLEU score of 16.09 when it was fine-tuned using the OSACT dataset.
[ "Ibrahim, Michael" ]
{CUFE} at {NADI} 2024 shared task: Fine-Tuning Llama-3 To Translate From {A}rabic Dialects To {M}odern {S}tandard {A}rabic
arabicnlp-1.87
Poster
2407.04910v1
https://aclanthology.org/2024.arabicnlp-1.88.bib
@inproceedings{alturayeif-etal-2024-stanceeval, title = "{S}tance{E}val 2024: The First {A}rabic Stance Detection Shared Task", author = "Alturayeif, Nora and Luqman, Hamzah and Alyafeai, Zaid and Yamani, Asma", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.88", pages = "774--782", abstract = "Recently, there has been a growing interest in analyzing user-generated text to understand opinions expressed on social media. In NLP, this task is known as stance detection, where the goal is to predict whether the writer is in favor, against, or has no opinion on a given topic. Stance detection is crucial for applications such as sentiment analysis, opinion mining, and social media monitoring, as it helps in capturing the nuanced perspectives of users on various subjects. As part of the ArabicNLP 2024 program, we organized the first shared task on Arabic Stance Detection, StanceEval 2024. This initiative aimed to foster advancements in stance detection for the Arabic language, a relatively underrepresented area in Arabic NLP research. This overview paper provides a detailed description of the shared task, covering the dataset, the methodologies used by various teams, and a summary of the results from all participants. We received 28 unique team registrations, and during the testing phase, 16 teams submitted valid entries. The highest classification F-score obtained was 84.38.", }
Recently, there has been a growing interest in analyzing user-generated text to understand opinions expressed on social media. In NLP, this task is known as stance detection, where the goal is to predict whether the writer is in favor, against, or has no opinion on a given topic. Stance detection is crucial for applications such as sentiment analysis, opinion mining, and social media monitoring, as it helps in capturing the nuanced perspectives of users on various subjects. As part of the ArabicNLP 2024 program, we organized the first shared task on Arabic Stance Detection, StanceEval 2024. This initiative aimed to foster advancements in stance detection for the Arabic language, a relatively underrepresented area in Arabic NLP research. This overview paper provides a detailed description of the shared task, covering the dataset, the methodologies used by various teams, and a summary of the results from all participants. We received 28 unique team registrations, and during the testing phase, 16 teams submitted valid entries. The highest classification F-score obtained was 84.38.
[ "Alturayeif, Nora", "Luqman, Hamzah", "Alyafeai, Zaid", "Yamani, Asma" ]
{S}tance{E}val 2024: The First {A}rabic Stance Detection Shared Task
arabicnlp-1.88
Poster
2301.05863v1
https://aclanthology.org/2024.arabicnlp-1.89.bib
@inproceedings{galal-kaseb-2024-team, title = "{T}eam{\_}{Z}ero at {S}tance{E}val2024: Frozen {PLM}s for {A}rabic Stance Detection", author = "Galal, Omar and Kaseb, Abdelrahman", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.89", pages = "783--787", abstract = "This research explores the effectiveness of using pre-trained language models (PLMs) as feature extractors for Arabic stance detection on social media, focusing on topics like women empowerment, COVID-19 vaccination, and digital transformation. By leveraging sentence transformers to extract embeddings and incorporating aggregation architectures on top of BERT, we aim to achieve high performance without the computational expense of fine-tuning. Our approach demonstrates significant resource and time savings while maintaining competitive performance, scoring an F1-score of 78.62 on the test set. This study highlights the potential of PLMs in enhancing stance detection in Arabic social media analysis, offering a resource-efficient alternative to traditional fine-tuning methods.", }
This research explores the effectiveness of using pre-trained language models (PLMs) as feature extractors for Arabic stance detection on social media, focusing on topics like women empowerment, COVID-19 vaccination, and digital transformation. By leveraging sentence transformers to extract embeddings and incorporating aggregation architectures on top of BERT, we aim to achieve high performance without the computational expense of fine-tuning. Our approach demonstrates significant resource and time savings while maintaining competitive performance, scoring an F1-score of 78.62 on the test set. This study highlights the potential of PLMs in enhancing stance detection in Arabic social media analysis, offering a resource-efficient alternative to traditional fine-tuning methods.
[ "Galal, Omar", "Kaseb, Abdelrahman" ]
{T}eam{\_}{Z}ero at {S}tance{E}val2024: Frozen {PLM}s for {A}rabic Stance Detection
arabicnlp-1.89
Poster
2405.10991v1
https://aclanthology.org/2024.arabicnlp-1.90.bib
@inproceedings{amal-etal-2024-anlp, title = "{ANLP} {RG} at {S}tance{E}val2024: Comparative Evaluation of Stance, Sentiment and Sarcasm Detection", author = "Amal, Mezghani and Boujelbane, Rahma and Ellouze, Mariem", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.90", pages = "788--793", abstract = "As part of our study, we worked on three tasks: stance detection, sarcasm detection and sentiment analysis using fine-tuning techniques on BERT-based models. Fine-tuning parameters were carefully adjusted over multiple iterations to maximize model performance. The three tasks are essential in the field of natural language processing (NLP) and present unique challenges. Stance detection is a critical task aimed at identifying a writer{'}s stances or viewpoints in relation to a topic. Sarcasm detection seeks to spot sarcastic expressions, while sentiment analysis determines the attitude expressed in a text. After numerous experiments, we identified Arabert-twitter as the model offering the best performance for all three tasks. In particular, it achieves a macro F-score of 78.08{\%} for stance detection, a macro F1-score of 59.51{\%} for sarcasm detection and a macro F1-score of 64.57{\%} for sentiment detection. Our source code is available at https://github.com/MezghaniAmal/Mawqif", }
As part of our study, we worked on three tasks: stance detection, sarcasm detection and sentiment analysis using fine-tuning techniques on BERT-based models. Fine-tuning parameters were carefully adjusted over multiple iterations to maximize model performance. The three tasks are essential in the field of natural language processing (NLP) and present unique challenges. Stance detection is a critical task aimed at identifying a writer{'}s stances or viewpoints in relation to a topic. Sarcasm detection seeks to spot sarcastic expressions, while sentiment analysis determines the attitude expressed in a text. After numerous experiments, we identified Arabert-twitter as the model offering the best performance for all three tasks. In particular, it achieves a macro F-score of 78.08{\%} for stance detection, a macro F1-score of 59.51{\%} for sarcasm detection and a macro F1-score of 64.57{\%} for sentiment detection. Our source code is available at https://github.com/MezghaniAmal/Mawqif
[ "Amal, Mezghani", "Boujelbane, Rahma", "Ellouze, Mariem" ]
{ANLP} {RG} at {S}tance{E}val2024: Comparative Evaluation of Stance, Sentiment and Sarcasm Detection
arabicnlp-1.90
Poster
1611.04326v2
https://aclanthology.org/2024.arabicnlp-1.91.bib
@inproceedings{lichouri-etal-2024-dzstance, title = "dz{S}tance at {S}tance{E}val2024: {A}rabic Stance Detection based on Sentence Transformers", author = "Lichouri, Mohamed and Lounnas, Khaled and Rafik, Ouaras and ABi, Mohamed and Guechtouli, Anis", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.91", pages = "794--799", abstract = "This study compares Term Frequency-Inverse Document Frequency (TF-IDF) features with Sentence Transformers for detecting writers{'} stances{---}favorable, opposing, or neutral{---}towards three significant topics: COVID-19 vaccine, digital transformation, and women empowerment. Through empirical evaluation, we demonstrate that Sentence Transformers outperform TF-IDF features across various experimental setups. Our team, dzStance, participated in a stance detection competition, achieving the 13th position (74.91{\%}) among 15 teams in Women Empowerment, 10th (73.43{\%}) in COVID Vaccine, and 12th (66.97{\%}) in Digital Transformation. Overall, our team{'}s performance ranked 13th (71.77{\%}) among all participants. Notably, our approach achieved promising F1-scores, highlighting its effectiveness in identifying writers{'} stances on diverse topics. These results underscore the potential of Sentence Transformers to enhance stance detection models for addressing critical societal issues.", }
This study compares Term Frequency-Inverse Document Frequency (TF-IDF) features with Sentence Transformers for detecting writers{'} stances{---}favorable, opposing, or neutral{---}towards three significant topics: COVID-19 vaccine, digital transformation, and women empowerment. Through empirical evaluation, we demonstrate that Sentence Transformers outperform TF-IDF features across various experimental setups. Our team, dzStance, participated in a stance detection competition, achieving the 13th position (74.91{\%}) among 15 teams in Women Empowerment, 10th (73.43{\%}) in COVID Vaccine, and 12th (66.97{\%}) in Digital Transformation. Overall, our team{'}s performance ranked 13th (71.77{\%}) among all participants. Notably, our approach achieved promising F1-scores, highlighting its effectiveness in identifying writers{'} stances on diverse topics. These results underscore the potential of Sentence Transformers to enhance stance detection models for addressing critical societal issues.
[ "Lichouri, Mohamed", "Lounnas, Khaled", "Rafik, Ouaras", "ABi, Mohamed", "Guechtouli, Anis" ]
dz{S}tance at {S}tance{E}val2024: {A}rabic Stance Detection based on Sentence Transformers
arabicnlp-1.91
Poster
2407.13603v1
https://aclanthology.org/2024.arabicnlp-1.92.bib
@inproceedings{hariri-abu-farha-2024-smash-stanceeval, title = "{SMASH} at {S}tance{E}val 2024: Prompt Engineering {LLM}s for {A}rabic Stance Detection", author = "Hariri, Youssef and Abu Farha, Ibrahim", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.92", pages = "800--806", abstract = "This paper presents our submission for the Stance Detection in Arabic Language (StanceEval) 2024 shared task conducted by Team SMASH of the University of Edinburgh. We evaluated the performance of various BERT-based and large language models (LLMs). MARBERT demonstrates superior performance among the BERT-based models, achieving F1 and macro-F1 scores of 0.570 and 0.770, respectively. In contrast, Command R model outperforms all models with the highest overall F1 score of 0.661 and macro F1 score of 0.820.", }
This paper presents our submission for the Stance Detection in Arabic Language (StanceEval) 2024 shared task conducted by Team SMASH of the University of Edinburgh. We evaluated the performance of various BERT-based and large language models (LLMs). MARBERT demonstrates superior performance among the BERT-based models, achieving F1 and macro-F1 scores of 0.570 and 0.770, respectively. In contrast, Command R model outperforms all models with the highest overall F1 score of 0.661 and macro F1 score of 0.820.
[ "Hariri, Youssef", "Abu Farha, Ibrahim" ]
{SMASH} at {S}tance{E}val 2024: Prompt Engineering {LLM}s for {A}rabic Stance Detection
arabicnlp-1.92
Poster
2204.13979v1
https://aclanthology.org/2024.arabicnlp-1.93.bib
@inproceedings{ibrahim-2024-cufe-stanceeval2024, title = "{CUFE} at {S}tance{E}val2024: {A}rabic Stance Detection with Fine-Tuned Llama-3 Model", author = "Ibrahim, Michael", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.93", pages = "807--810", abstract = "In NLP, stance detection identifies a writer{'}s position or viewpoint on a particular topic or entity from their text and social media activity, which includes preferences and relationships. Researchers have been exploring techniques and approaches to develop effective stance detection systems. Large language models{'} latest advancements offer a more effective solution to the stance detection problem. This paper proposes fine-tuning the newly released 8B-parameter Llama 3 model from Meta GenAI for Arabic text stance detection. The proposed method was ranked ninth in the StanceEval 2024 Task on stance detection in Arabic language, achieving a macro average $F_1$ score of 0.7647.", }
In NLP, stance detection identifies a writer{'}s position or viewpoint on a particular topic or entity from their text and social media activity, which includes preferences and relationships. Researchers have been exploring techniques and approaches to develop effective stance detection systems. Large language models{'} latest advancements offer a more effective solution to the stance detection problem. This paper proposes fine-tuning the newly released 8B-parameter Llama 3 model from Meta GenAI for Arabic text stance detection. The proposed method was ranked ninth in the StanceEval 2024 Task on stance detection in Arabic language, achieving a macro average $F_1$ score of 0.7647.
[ "Ibrahim, Michael" ]
{CUFE} at {S}tance{E}val2024: {A}rabic Stance Detection with Fine-Tuned Llama-3 Model
arabicnlp-1.93
Poster
1908.03146v1
https://aclanthology.org/2024.arabicnlp-1.94.bib
@inproceedings{hasanaath-alansari-2024-stancecrafters, title = "{S}tance{C}rafters at {S}tance{E}val2024: Multi-task Stance Detection using {BERT} Ensemble with Attention Based Aggregation", author = "Hasanaath, Ahmed and Alansari, Aisha", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.94", pages = "811--815", abstract = "Stance detection is a key NLP problem that classifies a writer{'}s viewpoint on a topic based on their writing. This paper outlines our approach for Stance Detection in Arabic Language Shared Task (StanceEval2024), focusing on attitudes towards the COVID-19 vaccine, digital transformation, and women{'}s empowerment. The proposed model uses parallel multi-task learning with two fine-tuned BERT-based models combined via an attention module. Results indicate this ensemble outperforms a single BERT model, demonstrating the benefits of using BERT architectures trained on diverse datasets. Specifically, Arabert-Twitterv2, trained on tweets, and Camel-Lab, trained on Modern Standard Arabic (MSA), Dialectal Arabic (DA), and Classical Arabic (CA), allowed us to leverage diverse Arabic dialects and styles.", }
Stance detection is a key NLP problem that classifies a writer{'}s viewpoint on a topic based on their writing. This paper outlines our approach for Stance Detection in Arabic Language Shared Task (StanceEval2024), focusing on attitudes towards the COVID-19 vaccine, digital transformation, and women{'}s empowerment. The proposed model uses parallel multi-task learning with two fine-tuned BERT-based models combined via an attention module. Results indicate this ensemble outperforms a single BERT model, demonstrating the benefits of using BERT architectures trained on diverse datasets. Specifically, Arabert-Twitterv2, trained on tweets, and Camel-Lab, trained on Modern Standard Arabic (MSA), Dialectal Arabic (DA), and Classical Arabic (CA), allowed us to leverage diverse Arabic dialects and styles.
[ "Hasanaath, Ahmed", "Alansari, Aisha" ]
{S}tance{C}rafters at {S}tance{E}val2024: Multi-task Stance Detection using {BERT} Ensemble with Attention Based Aggregation
arabicnlp-1.94
Poster
2211.03061v1
https://aclanthology.org/2024.arabicnlp-1.95.bib
@inproceedings{alghaslan-almutairy-2024-mgkm, title = "{MGKM} at {S}tance{E}val2024 Fine-Tuning Large Language Models for {A}rabic Stance Detection", author = "Alghaslan, Mamoun and Almutairy, Khaled", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.95", pages = "816--822", abstract = "Social media platforms have become essential in daily life, enabling users to express their opinions and stances on various topics. Stance detection, which identifies the viewpoint expressed in text toward a target, has predominantly focused on English. MAWQIF is the pioneering Arabic dataset for target-specific stance detection, consisting of 4,121 tweets annotated with stance, sentiment, and sarcasm. The original dataset, benchmarked on four BERT-based models, achieved a best macro-F1 score of 78.89, indicating significant room for improvement. This study evaluates the effectiveness of three Large Language Models (LLMs) in detecting target-specific stances in MAWQIF. The LLMs assessed are ChatGPT-3.5-turbo, Meta-Llama-3-8B-Instruct, and Falcon-7B-Instruct. Performance was measured using both zero-shot and full fine-tuning approaches. Our findings demonstrate that fine-tuning substantially enhances the stance detection capabilities of LLMs in Arabic tweets. Notably, GPT-3.5-Turbo achieved the highest performance with a macro-F1 score of 82.93, underscoring the potential of fine-tuned LLMs for language-specific applications.", }
Social media platforms have become essential in daily life, enabling users to express their opinions and stances on various topics. Stance detection, which identifies the viewpoint expressed in text toward a target, has predominantly focused on English. MAWQIF is the pioneering Arabic dataset for target-specific stance detection, consisting of 4,121 tweets annotated with stance, sentiment, and sarcasm. The original dataset, benchmarked on four BERT-based models, achieved a best macro-F1 score of 78.89, indicating significant room for improvement. This study evaluates the effectiveness of three Large Language Models (LLMs) in detecting target-specific stances in MAWQIF. The LLMs assessed are ChatGPT-3.5-turbo, Meta-Llama-3-8B-Instruct, and Falcon-7B-Instruct. Performance was measured using both zero-shot and full fine-tuning approaches. Our findings demonstrate that fine-tuning substantially enhances the stance detection capabilities of LLMs in Arabic tweets. Notably, GPT-3.5-Turbo achieved the highest performance with a macro-F1 score of 82.93, underscoring the potential of fine-tuned LLMs for language-specific applications.
[ "Alghaslan, Mamoun", "Almutairy, Khaled" ]
{MGKM} at {S}tance{E}val2024 Fine-Tuning Large Language Models for {A}rabic Stance Detection
arabicnlp-1.95
Poster
2407.13603v1
https://aclanthology.org/2024.arabicnlp-1.96.bib
@inproceedings{badran-etal-2024-alexunlp, title = "{A}lex{UNLP}-{BH} at {S}tance{E}val2024: Multiple Contrastive Losses Ensemble Strategy with Multi-Task Learning For Stance Detection in {A}rabic", author = "Badran, Mohamed and Hamdy, Mo{'}men and Torki, Marwan and El-Makky, Nagwa", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.96", pages = "823--827", abstract = "Stance detection, an evolving task in natural language processing, involves understanding a writer{'}s perspective on certain topics by analyzing their written text and interactions online, especially on social media platforms. In this paper, we outline our submission to the StanceEval task, leveraging the Mawqif dataset featured in The Second Arabic Natural Language Processing Conference. Our task is to detect writers{'} stances (Favor, Against, or None) towards three selected topics (COVID-19 vaccine, digital transformation, and women empowerment). We present our approach, primarily relying on a contrastive loss ensemble strategy. Our proposed approach achieved an F1-score of 0.8438 and ranked first in the StanceEval 2024 task. The code and checkpoints are available at https://github.com/MBadran2000/Mawqif.git", }
Stance detection, an evolving task in natural language processing, involves understanding a writer{'}s perspective on certain topics by analyzing their written text and interactions online, especially on social media platforms. In this paper, we outline our submission to the StanceEval task, leveraging the Mawqif dataset featured in The Second Arabic Natural Language Processing Conference. Our task is to detect writers{'} stances (Favor, Against, or None) towards three selected topics (COVID-19 vaccine, digital transformation, and women empowerment). We present our approach, primarily relying on a contrastive loss ensemble strategy. Our proposed approach achieved an F1-score of 0.8438 and ranked first in the StanceEval 2024 task. The code and checkpoints are available at https://github.com/MBadran2000/Mawqif.git
[ "Badran, Mohamed", "Hamdy, Mo{'}men", "Torki, Marwan", "El-Makky, Nagwa" ]
{A}lex{UNLP}-{BH} at {S}tance{E}val2024: Multiple Contrastive Losses Ensemble Strategy with Multi-Task Learning For Stance Detection in {A}rabic
arabicnlp-1.96
Poster
2405.10991v1
https://aclanthology.org/2024.arabicnlp-1.97.bib
@inproceedings{alshenaifi-etal-2024-rasid, title = "Rasid at {S}tance{E}val: Fine-tuning {MARBERT} for {A}rabic Stance Detection", author = "AlShenaifi, Nouf and Alangari, Nourah and Al-Negheimish, Hadeel", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.97", pages = "828--831", abstract = "As social media usage continues to rise, the demand for systems to analyze opinions and sentiments expressed in textual data has become more critical. This paper presents our submission to the Stance Detection in Arabic Language Shared Task, in which we evaluated three models: the fine-tuned MARBERT Transformer, the fine-tuned AraBERT Transformer, and an Ensemble of Machine learning Classifiers. Our findings indicate that the MARBERT Transformer outperformed the other models in performance across all targets. In contrast, the Ensemble Classifier, which combines traditional machine learning techniques, demonstrated relatively lower effectiveness.", }
As social media usage continues to rise, the demand for systems to analyze opinions and sentiments expressed in textual data has become more critical. This paper presents our submission to the Stance Detection in Arabic Language Shared Task, in which we evaluated three models: the fine-tuned MARBERT Transformer, the fine-tuned AraBERT Transformer, and an Ensemble of Machine learning Classifiers. Our findings indicate that the MARBERT Transformer outperformed the other models in performance across all targets. In contrast, the Ensemble Classifier, which combines traditional machine learning techniques, demonstrated relatively lower effectiveness.
[ "AlShenaifi, Nouf", "Alangari, Nourah", "Al-Negheimish, Hadeel" ]
Rasid at {S}tance{E}val: Fine-tuning {MARBERT} for {A}rabic Stance Detection
arabicnlp-1.97
Poster
2103.01065v1
https://aclanthology.org/2024.arabicnlp-1.98.bib
@inproceedings{jaballah-2024-ishfmg, title = "{ISHFMG}{\_}{TUN} at {S}tance{E}val: Ensemble Method for {A}rabic Stance Evaluation System", author = "Jaballah, Mustapha", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.98", pages = "832--836", abstract = "It is essential to understand the attitude of individuals towards specific topics in the Arabic language for tasks like sentiment analysis, opinion mining, and social media monitoring. However, the diversity of the linguistic characteristics of the Arabic language presents several challenges to accurately evaluating the stance. In this study, we suggest an ensemble approach to tackle these challenges. Our method combines different classifiers using the voting method. Through multiple experiments, we demonstrate the effectiveness of our method, achieving a significant F1-score of 0.7027. Our findings contribute to advancing NLP and offer valuable insights for applications like sentiment analysis, opinion mining, and social media monitoring.", }
It is essential to understand the attitude of individuals towards specific topics in the Arabic language for tasks like sentiment analysis, opinion mining, and social media monitoring. However, the diversity of the linguistic characteristics of the Arabic language presents several challenges to accurately evaluating the stance. In this study, we suggest an ensemble approach to tackle these challenges. Our method combines different classifiers using the voting method. Through multiple experiments, we demonstrate the effectiveness of our method, achieving a significant F1-score of 0.7027. Our findings contribute to advancing NLP and offer valuable insights for applications like sentiment analysis, opinion mining, and social media monitoring.
[ "Jaballah, Mustapha" ]
{ISHFMG}{\_}{TUN} at {S}tance{E}val: Ensemble Method for {A}rabic Stance Evaluation System
arabicnlp-1.98
Poster
2204.13979v1
https://aclanthology.org/2024.arabicnlp-1.99.bib
@inproceedings{shukla-etal-2024-pict, title = "{PICT} at {S}tance{E}val2024: Stance Detection in {A}rabic using Ensemble of Large Language Models", author = "Shukla, Ishaan and Vaidya, Ankit and Kale, Geetanjali", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.99", pages = "837--841", abstract = "This paper outlines our approach to the StanceEval 2024 Arabic Stance Evaluation shared task. The goal of the task was to identify the stance, one out of three (Favor, Against, or None), towards tweets on three topics, namely COVID-19 Vaccine, Digital Transformation, and Women Empowerment. Our approach consists of efficiently fine-tuning BERT-based models for both Single-Task Learning and Multi-Task Learning, the details of which are discussed. Finally, an ensemble was implemented on the best-performing models to maximize overall performance. We achieved a macro F1 score of 78.02{\%} in this shared task. Our codebase is available publicly.", }
This paper outlines our approach to the StanceEval 2024 Arabic Stance Evaluation shared task. The goal of the task was to identify the stance, one out of three (Favor, Against, or None), towards tweets on three topics, namely COVID-19 Vaccine, Digital Transformation, and Women Empowerment. Our approach consists of efficiently fine-tuning BERT-based models for both Single-Task Learning and Multi-Task Learning, the details of which are discussed. Finally, an ensemble was implemented on the best-performing models to maximize overall performance. We achieved a macro F1 score of 78.02{\%} in this shared task. Our codebase is available publicly.
[ "Shukla, Ishaan", "Vaidya, Ankit", "Kale, Geetanjali" ]
{PICT} at {S}tance{E}val2024: Stance Detection in {A}rabic using Ensemble of Large Language Models
arabicnlp-1.99
Poster
2005.08946v1
https://aclanthology.org/2024.arabicnlp-1.100.bib
@inproceedings{melhem-etal-2024-tao, title = "{TAO} at {S}tance{E}val2024 Shared Task: {A}rabic Stance Detection using {A}ra{BERT}", author = "Melhem, Anas and Hamed, Osama and Sammar, Thaer", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.100", pages = "842--846", abstract = "In this paper, we present a high-performing model for Arabic stance detection on the STANCEEVAL2024 shared task, part of ARABICNLP2024. Our model leverages ARABERTV1, a pre-trained Arabic language model, within a single-task learning framework. We fine-tuned the model on stance detection data for three specific topics: COVID19 vaccine, digital transformation, and women empowerment, extracted from the MAWQIF corpus. In terms of performance, our model achieves a 73.30 macro-F1 score for women empowerment, 70.51 for digital transformation, and 64.55 for COVID-19 vaccine detection.", }
In this paper, we present a high-performing model for Arabic stance detection on the STANCEEVAL2024 shared task, part of ARABICNLP2024. Our model leverages ARABERTV1, a pre-trained Arabic language model, within a single-task learning framework. We fine-tuned the model on stance detection data for three specific topics: COVID19 vaccine, digital transformation, and women empowerment, extracted from the MAWQIF corpus. In terms of performance, our model achieves a 73.30 macro-F1 score for women empowerment, 70.51 for digital transformation, and 64.55 for COVID-19 vaccine detection.
[ "Melhem, Anas", "Hamed, Osama", "Sammar, Thaer" ]
{TAO} at {S}tance{E}val2024 Shared Task: {A}rabic Stance Detection using {A}ra{BERT}
arabicnlp-1.100
Poster
2407.13603v1
https://aclanthology.org/2024.arabicnlp-1.101.bib
@inproceedings{jarrar-etal-2024-wojoodner, title = "{W}ojood{NER} 2024: The Second {A}rabic Named Entity Recognition Shared Task", author = "Jarrar, Mustafa and Hamad, Nagham and Khalilia, Mohammed and Talafha, Bashar and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.101", pages = "847--857", abstract = "We present WojoodNER-2024, the second Arabic Named Entity Recognition (NER) Shared Task. In WojoodNER-2024, we focus on fine-grained Arabic NER. We provided participants with a new Arabic fine-grained NER dataset called Wojoodfine, annotated with subtypes of entities. WojoodNER-2024 encompassed three subtasks: ($i$) Closed-Track Flat Fine-Grained NER, ($ii$) Closed-Track Nested Fine-Grained NER, and ($iii$) an Open-Track NER for the Israeli War on Gaza. A total of 43 unique teams registered for this shared task. Five teams participated in the Flat Fine-Grained Subtask, among which two teams tackled the Nested Fine-Grained Subtask and one team participated in the Open-Track NER Subtask. The winning teams achieved $F_1$ scores of 91{\%} and 92{\%} in the Flat Fine-Grained and Nested Fine-Grained Subtasks, respectively. The sole team in the Open-Track Subtask achieved an $F_1$ score of 73.7{\%}.", }
We present WojoodNER-2024, the second Arabic Named Entity Recognition (NER) Shared Task. In WojoodNER-2024, we focus on fine-grained Arabic NER. We provided participants with a new Arabic fine-grained NER dataset called Wojoodfine, annotated with subtypes of entities. WojoodNER-2024 encompassed three subtasks: ($i$) Closed-Track Flat Fine-Grained NER, ($ii$) Closed-Track Nested Fine-Grained NER, and ($iii$) an Open-Track NER for the Israeli War on Gaza. A total of 43 unique teams registered for this shared task. Five teams participated in the Flat Fine-Grained Subtask, among which two teams tackled the Nested Fine-Grained Subtask and one team participated in the Open-Track NER Subtask. The winning teams achieved $F_1$ scores of 91{\%} and 92{\%} in the Flat Fine-Grained and Nested Fine-Grained Subtasks, respectively. The sole team in the Open-Track Subtask achieved an $F_1$ score of 73.7{\%}.
[ "Jarrar, Mustafa", "Hamad, Nagham", "Khalilia, Mohammed", "Talafha, Bashar", "Elmadany, AbdelRahim", "Abdul-Mageed, Muhammad" ]
{W}ojood{NER} 2024: The Second {A}rabic Named Entity Recognition Shared Task
arabicnlp-1.101
Poster
2310.16153v1
https://aclanthology.org/2024.arabicnlp-1.102.bib
@inproceedings{alotaibi-etal-2024-munera, title = "mu{NER}a at {W}ojood{NER} 2024: Multi-tasking {NER} Approach", author = "Alotaibi, Nouf and Alhomoud, Haneen and Murayshid, Hanan and Alshammari, Waad and Alshalawi, Nouf and Alkhereyf, Sakhar", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.102", pages = "858--866", abstract = "This paper presents our system {``}muNERa{''}, submitted to the WojoodNER 2024 shared task at the second ArabicNLP conference. We participated in two subtasks, the flat and nested fine-grained NER sub-tasks (1 and 2). muNERa achieved first place in the nested NER sub-task and second place in the flat NER sub-task. The system is based on the TANL framework (CITATION), by using a sequence-to-sequence structured language translation approach to model both tasks. We utilize the pre-trained AraT5v2-base model as the base model for the TANL framework. The best-performing muNERa model achieves 91.07{\%} and 90.26{\%} for the F-1 scores on the test sets for the nested and flat subtasks, respectively.", }
This paper presents our system {``}muNERa{''}, submitted to the WojoodNER 2024 shared task at the second ArabicNLP conference. We participated in two subtasks, the flat and nested fine-grained NER sub-tasks (1 and 2). muNERa achieved first place in the nested NER sub-task and second place in the flat NER sub-task. The system is based on the TANL framework (CITATION), by using a sequence-to-sequence structured language translation approach to model both tasks. We utilize the pre-trained AraT5v2-base model as the base model for the TANL framework. The best-performing muNERa model achieves 91.07{\%} and 90.26{\%} for the F-1 scores on the test sets for the nested and flat subtasks, respectively.
[ "Alotaibi, Nouf", "Alhomoud, Haneen", "Murayshid, Hanan", "Alshammari, Waad", "Alshalawi, Nouf", "Alkhereyf, Sakhar" ]
mu{NER}a at {W}ojood{NER} 2024: Multi-tasking {NER} Approach
arabicnlp-1.102
Poster
2310.16153v1
https://aclanthology.org/2024.arabicnlp-1.103.bib
@inproceedings{yahia-etal-2024-addax, title = "Addax at {W}ojood{NER} 2024: Attention-Based Dual-Channel Neural Network for {A}rabic Named Entity Recognition", author = "Yahia, Issam and Atou, Houdaifa and Berrada, Ismail", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.103", pages = "867--873", abstract = "Named Entity Recognition (NER) is a fundamental task in Natural Language Processing (NLP) that focuses on extracting entities such as names of people, organizations, locations, and dates from text. Despite significant advancements due to deep learning and transformer architectures like BERT, NER still faces challenges, particularly in low-resource languages like Arabic. This paper presents a BERT-based NER system that utilizes a two-channel parallel hybrid neural network with an attention mechanism specifically designed for the NER Shared Task 2024. In the competition, our approach ranked second by scoring 90.13{\%} in micro-F1 on the test set. The results demonstrate the effectiveness of combining advanced neural network architectures with contextualized word embeddings in improving NER performance for Arabic.", }
Named Entity Recognition (NER) is a fundamental task in Natural Language Processing (NLP) that focuses on extracting entities such as names of people, organizations, locations, and dates from text. Despite significant advancements due to deep learning and transformer architectures like BERT, NER still faces challenges, particularly in low-resource languages like Arabic. This paper presents a BERT-based NER system that utilizes a two-channel parallel hybrid neural network with an attention mechanism specifically designed for the NER Shared Task 2024. In the competition, our approach ranked second by scoring 90.13{\%} in micro-F1 on the test set. The results demonstrate the effectiveness of combining advanced neural network architectures with contextualized word embeddings in improving NER performance for Arabic.
[ "Yahia, Issam", "Atou, Houdaifa", "Berrada, Ismail" ]
Addax at {W}ojood{NER} 2024: Attention-Based Dual-Channel Neural Network for {A}rabic Named Entity Recognition
arabicnlp-1.103
Poster
2310.16153v1
https://aclanthology.org/2024.arabicnlp-1.104.bib
@inproceedings{hamoud-etal-2024-dru, title = "{DRU} at {W}ojood{NER} 2024: A Multi-level Method Approach", author = "Hamoud, Hadi and Chakra, Chadi and Hamdan, Nancy and Mraikhat, Osama and Albared, Doha and Zaraket, Fadi", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.104", pages = "874--879", abstract = "In this paper, we present our submission for the WojoodNER 2024 Shared Tasks addressing flat and nested sub-tasks (1, 2). We experiment with three different approaches. We train (i) an Arabic fine-tuned version of BLOOMZ-7b-mt, GEMMA-7b, and AraBERTv2 on a multi-label token classification task; (ii) two AraBERTv2 models, on main types and sub-types respectively; and (iii) one model for main types and four for the four sub-types. Based on the Wojood NER 2024 test set results, the three fine-tuned models performed similarly with AraBERTv2 favored (F1: Flat=.8780 Nested=.9040). The five-model approach performed slightly better (F1: Flat=.8782 Nested=.9043).", }
In this paper, we present our submission for the WojoodNER 2024 Shared Tasks addressing flat and nested sub-tasks (1, 2). We experiment with three different approaches. We train (i) an Arabic fine-tuned version of BLOOMZ-7b-mt, GEMMA-7b, and AraBERTv2 on a multi-label token classification task; (ii) two AraBERTv2 models, on main types and sub-types respectively; and (iii) one model for main types and four for the four sub-types. Based on the Wojood NER 2024 test set results, the three fine-tuned models performed similarly with AraBERTv2 favored (F1: Flat=.8780 Nested=.9040). The five-model approach performed slightly better (F1: Flat=.8782 Nested=.9043).
[ "Hamoud, Hadi", "Chakra, Chadi", "Hamdan, Nancy", "Mraikhat, Osama", "Albared, Doha", "Zaraket, Fadi" ]
{DRU} at {W}ojood{NER} 2024: A Multi-level Method Approach
arabicnlp-1.104
Poster
2310.16153v1
https://aclanthology.org/2024.arabicnlp-1.105.bib
@inproceedings{alshammari-2024-bangor, title = "{B}angor {U}niversity at {W}ojood{NER} 2024: Advancing {A}rabic Named Entity Recognition with {CAM}e{LBERT}-Mix", author = "Alshammari, Norah", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.105", pages = "880--884", abstract = "This paper describes the approach and results of Bangor University{'}s participation in the WojoodNER 2024 shared task, specifically for Subtask-1: Closed-Track Flat Fine-Grain NER. We present a system utilizing a transformer-based model called bert-base-arabic-camelbert-mix, fine-tuned on the Wojood-Fine corpus. A key enhancement to our approach involves adding a linear layer on top of the bert-base-arabic-camelbert-mix to classify each token into one of 51 different entity types and subtypes, as well as the {`}O{'} label for non-entity tokens. This linear layer effectively maps the contextualized embeddings produced by BERT to the desired output labels, addressing the complex challenges of fine-grained Arabic NER. The system achieved competitive results in precision, recall, and F1 scores, thereby contributing significant insights into the application of transformers in Arabic NER tasks.", }
This paper describes the approach and results of Bangor University{'}s participation in the WojoodNER 2024 shared task, specifically for Subtask-1: Closed-Track Flat Fine-Grain NER. We present a system utilizing a transformer-based model called bert-base-arabic-camelbert-mix, fine-tuned on the Wojood-Fine corpus. A key enhancement to our approach involves adding a linear layer on top of the bert-base-arabic-camelbert-mix to classify each token into one of 51 different entity types and subtypes, as well as the {`}O{'} label for non-entity tokens. This linear layer effectively maps the contextualized embeddings produced by BERT to the desired output labels, addressing the complex challenges of fine-grained Arabic NER. The system achieved competitive results in precision, recall, and F1 scores, thereby contributing significant insights into the application of transformers in Arabic NER tasks.
[ "Alshammari, Norah" ]
{B}angor {U}niversity at {W}ojood{NER} 2024: Advancing {A}rabic Named Entity Recognition with {CAM}e{LBERT}-Mix
arabicnlp-1.105
Poster
2310.16153v1
https://aclanthology.org/2024.arabicnlp-1.106.bib
@inproceedings{hamdan-etal-2024-dru, title = "{DRU} at {W}ojood{NER} 2024: {ICL} {LLM} for {A}rabic {NER}", author = "Hamdan, Nancy and Hamoud, Hadi and Chakra, Chadi and Mraikhat, Osama and Albared, Doha and Zaraket, Fadi", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.106", pages = "885--893", abstract = "This paper details our submission to the WojoodNER Shared Task 2024, leveraging in-context learning with large language models for Arabic Named Entity Recognition. We utilized the Command R model to perform fine-grained NER on the Wojood-Fine corpus. Our primary approach achieved an F1 score of 0.737 and a recall of 0.756. Post-processing the generated predictions to correct format inconsistencies resulted in an increased recall of 0.759, and a similar F1 score of 0.735. A multi-level prompting method and aggregation of outputs resulted in a lower F1 score of 0.637. Our results demonstrate the potential of ICL for Arabic NER while highlighting challenges related to LLM output consistency.", }
This paper details our submission to the WojoodNER Shared Task 2024, leveraging in-context learning with large language models for Arabic Named Entity Recognition. We utilized the Command R model to perform fine-grained NER on the Wojood-Fine corpus. Our primary approach achieved an F1 score of 0.737 and a recall of 0.756. Post-processing the generated predictions to correct format inconsistencies resulted in an increased recall of 0.759, and a similar F1 score of 0.735. A multi-level prompting method and aggregation of outputs resulted in a lower F1 score of 0.637. Our results demonstrate the potential of ICL for Arabic NER while highlighting challenges related to LLM output consistency.
[ "Hamdan, Nancy", "Hamoud, Hadi", "Chakra, Chadi", "Mraikhat, Osama", "Albared, Doha", "Zaraket, Fadi" ]
{DRU} at {W}ojood{NER} 2024: {ICL} {LLM} for {A}rabic {NER}
arabicnlp-1.106
Poster
2310.16153v1
https://aclanthology.org/2024.arabicnlp-1.107.bib
@inproceedings{abdou-mahmoud-2024-mucai, title = "muc{AI} at {W}ojood{NER} 2024: {A}rabic Named Entity Recognition with Nearest Neighbor Search", author = "Abdou, Ahmed and Mahmoud, Tasneem", editor = "Habash, Nizar and Bouamor, Houda and Eskander, Ramy and Tomeh, Nadi and Abu Farha, Ibrahim and Abdelali, Ahmed and Touileb, Samia and Hamed, Injy and Onaizan, Yaser and Alhafni, Bashar and Antoun, Wissam and Khalifa, Salam and Haddad, Hatem and Zitouni, Imed and AlKhamissi, Badr and Almatham, Rawan and Mrini, Khalil", booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.arabicnlp-1.107", pages = "894--898", abstract = "Named Entity Recognition (NER) is a task in Natural Language Processing (NLP) that aims to identify and classify entities in text into predefined categories. However, when applied to Arabic data, NER encounters unique challenges stemming from the language{'}s rich morphological inflections, absence of capitalization cues, and spelling variants, where a single word can comprise multiple morphemes. In this paper, we introduce Arabic KNN-NER, our submission to the Wojood NER Shared Task 2024 (ArabicNLP 2024). We have participated in the shared sub-task 1 Flat NER. In this shared sub-task, we tackle fine-grained flat-entity recognition for Arabic text, where we identify a single main entity and possibly zero or multiple sub-entities for each word. Arabic KNN-NER augments the probability distribution of a fine-tuned model with another label probability distribution derived from performing a KNN search over the cached training data. Our submission achieved 91{\%} on the test set on the WojoodFine dataset, placing Arabic KNN-NER on top of the leaderboard for the shared task.", }
Named Entity Recognition (NER) is a task in Natural Language Processing (NLP) that aims to identify and classify entities in text into predefined categories. However, when applied to Arabic data, NER encounters unique challenges stemming from the language{'}s rich morphological inflections, absence of capitalization cues, and spelling variants, where a single word can comprise multiple morphemes. In this paper, we introduce Arabic KNN-NER, our submission to the Wojood NER Shared Task 2024 (ArabicNLP 2024). We have participated in the shared sub-task 1 Flat NER. In this shared sub-task, we tackle fine-grained flat-entity recognition for Arabic text, where we identify a single main entity and possibly zero or multiple sub-entities for each word. Arabic KNN-NER augments the probability distribution of a fine-tuned model with another label probability distribution derived from performing a KNN search over the cached training data. Our submission achieved 91{\%} on the test set on the WojoodFine dataset, placing Arabic KNN-NER on top of the leaderboard for the shared task.
[ "Abdou, Ahmed", "Mahmoud, Tasneem" ]
muc{AI} at {W}ojood{NER} 2024: {A}rabic Named Entity Recognition with Nearest Neighbor Search
arabicnlp-1.107
Poster
2408.03652v1
https://aclanthology.org/2024.argmining-1.1.bib
@inproceedings{gemechu-etal-2024-aries, title = "{ARIES}: A General Benchmark for Argument Relation Identification", author = "Gemechu, Debela and Ruiz-Dolz, Ramon and Reed, Chris", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.1", pages = "1--14", abstract = "Measuring advances in argument mining is one of the main challenges in the area. Different theories of argument, heterogeneous annotations, and a varied set of argumentation domains make it difficult to contextualise and understand the results reported in different work from a general perspective. In this paper, we present ARIES, a general benchmark for Argument Relation Identification aimed at providing with a standard evaluation for argument mining research. ARIES covers the three different language modelling approaches: sequence and token modelling, and sequence-to-sequence-to-sequence alignment, together with the three main Transformer-based model architectures: encoder-only, decoder-only, and encoder-decoder. Furthermore, the benchmark consists of eight different argument mining datasets, covering the most common argumentation domains, and standardised with the same annotation structures. This paper provides a first comprehensive and comparative set of results in argument mining across a broad range of configurations to compare with, both advancing the state-of-the-art, and establishing a standard way to measure future advances in the area. Across varied task setups and architectures, our experiments reveal consistent challenges in cross-dataset evaluation, with notably poor results. Given the models{'} struggle to acquire transferable skills, the task remains challenging, opening avenues for future research.", }
Measuring advances in argument mining is one of the main challenges in the area. Different theories of argument, heterogeneous annotations, and a varied set of argumentation domains make it difficult to contextualise and understand the results reported in different work from a general perspective. In this paper, we present ARIES, a general benchmark for Argument Relation Identification aimed at providing with a standard evaluation for argument mining research. ARIES covers the three different language modelling approaches: sequence and token modelling, and sequence-to-sequence-to-sequence alignment, together with the three main Transformer-based model architectures: encoder-only, decoder-only, and encoder-decoder. Furthermore, the benchmark consists of eight different argument mining datasets, covering the most common argumentation domains, and standardised with the same annotation structures. This paper provides a first comprehensive and comparative set of results in argument mining across a broad range of configurations to compare with, both advancing the state-of-the-art, and establishing a standard way to measure future advances in the area. Across varied task setups and architectures, our experiments reveal consistent challenges in cross-dataset evaluation, with notably poor results. Given the models{'} struggle to acquire transferable skills, the task remains challenging, opening avenues for future research.
[ "Gemechu, Debela", "Ruiz-Dolz, Ramon", "Reed, Chris" ]
{ARIES}: A General Benchmark for Argument Relation Identification
argmining-1.1
Poster
2402.14458v1
https://aclanthology.org/2024.argmining-1.2.bib
@inproceedings{freedman-toni-2024-detecting, title = "Detecting Scientific Fraud Using Argument Mining", author = "Freedman, Gabriel and Toni, Francesca", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.2", pages = "15--28", abstract = "A proliferation of fraudulent scientific research in recent years has precipitated a greater interest in more effective methods of detection. There are many varieties of academic fraud, but a particularly challenging type to detect is the use of paper mills and the faking of peer-review. To the best of our knowledge, there have so far been no attempts to automate this process. The complexity of this issue precludes the use of heuristic methods, like pattern-matching techniques, which are employed for other types of fraud. Our proposed method in this paper uses techniques from the Computational Argumentation literature (i.e. argument mining and argument quality evaluation). Our central hypothesis stems from the assumption that articles that have not been subject to the proper level of scrutiny will contain poorly formed and reasoned arguments, relative to legitimately published papers. We use a variety of corpora to test this approach, including a collection of abstracts taken from retracted papers. We show significant improvement compared to a number of baselines, suggesting that this approach merits further investigation.", }
A proliferation of fraudulent scientific research in recent years has precipitated a greater interest in more effective methods of detection. There are many varieties of academic fraud, but a particularly challenging type to detect is the use of paper mills and the faking of peer-review. To the best of our knowledge, there have so far been no attempts to automate this process. The complexity of this issue precludes the use of heuristic methods, like pattern-matching techniques, which are employed for other types of fraud. Our proposed method in this paper uses techniques from the Computational Argumentation literature (i.e. argument mining and argument quality evaluation). Our central hypothesis stems from the assumption that articles that have not been subject to the proper level of scrutiny will contain poorly formed and reasoned arguments, relative to legitimately published papers. We use a variety of corpora to test this approach, including a collection of abstracts taken from retracted papers. We show significant improvement compared to a number of baselines, suggesting that this approach merits further investigation.
[ "Freedman, Gabriel", "Toni, Francesca" ]
Detecting Scientific Fraud Using Argument Mining
argmining-1.2
Poster
1309.3944v1
https://aclanthology.org/2024.argmining-1.3.bib
@inproceedings{bondarenko-etal-2024-deepct, title = "{D}eep{CT}-enhanced Lexical Argument Retrieval", author = {Bondarenko, Alexander and Fr{\"o}be, Maik and Hollatz, Danik and Merker, Jan and Hagen, Matthias}, editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.3", pages = "29--35", abstract = "The recent Touch{\'e} lab{'}s argument retrieval task focuses on controversial topics like {`}Should bottled water be banned?{'} and asks to retrieve relevant pro/con arguments. Interestingly, the most effective systems submitted to that task still are based on lexical retrieval models like BM25. In other domains, neural retrievers that capture semantics are more effective than lexical baselines. To add more {``}semantics{''} to argument retrieval, we propose to combine lexical models with DeepCT-based document term weights. Our evaluation shows that our approach is more effective than all the systems submitted to the Touch{\'e} lab while being on par with modern neural re-rankers that themselves are computationally more expensive.", }
The recent Touch{\'e} lab{'}s argument retrieval task focuses on controversial topics like {`}Should bottled water be banned?{'} and asks to retrieve relevant pro/con arguments. Interestingly, the most effective systems submitted to that task still are based on lexical retrieval models like BM25. In other domains, neural retrievers that capture semantics are more effective than lexical baselines. To add more {``}semantics{''} to argument retrieval, we propose to combine lexical models with DeepCT-based document term weights. Our evaluation shows that our approach is more effective than all the systems submitted to the Touch{\'e} lab while being on par with modern neural re-rankers that themselves are computationally more expensive.
[ "Bondarenko, Alexander", "Fr{\\\"o}be, Maik", "Hollatz, Danik", "Merker, Jan", "Hagen, Matthias" ]
{D}eep{CT}-enhanced Lexical Argument Retrieval
argmining-1.3
Poster
1512.00578v1
https://aclanthology.org/2024.argmining-1.4.bib
@inproceedings{mezza-etal-2024-exploiting, title = "Exploiting Dialogue Acts and Context to Identify Argumentative Relations in Online Debates", author = "Mezza, Stefano and Wobcke, Wayne and Blair, Alan", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.4", pages = "36--45", abstract = "Argumentative Relation Classification is the task of determining the relationship between two contributions in the context of an argumentative dialogue. Existing models in the literature rely on a combination of lexical features and pre-trained language models to tackle this task; while this approach is somewhat effective, it fails to take into account the importance of pragmatic features such as the illocutionary force of the argument or the structure of previous utterances in the discussion; relying solely on lexical features also produces models that over-fit their initial training set and do not scale to unseen domains. In this work, we introduce ArguNet, a new model for Argumentative Relation Classification which relies on a combination of Dialogue Acts and Dialogue Context to improve the representation of argument structures in opinionated dialogues. We show that our model achieves state-of-the-art results on the Kialo benchmark test set, and provide evidence of its robustness in an open-domain scenario.", }
Argumentative Relation Classification is the task of determining the relationship between two contributions in the context of an argumentative dialogue. Existing models in the literature rely on a combination of lexical features and pre-trained language models to tackle this task; while this approach is somewhat effective, it fails to take into account the importance of pragmatic features such as the illocutionary force of the argument or the structure of previous utterances in the discussion; relying solely on lexical features also produces models that over-fit their initial training set and do not scale to unseen domains. In this work, we introduce ArguNet, a new model for Argumentative Relation Classification which relies on a combination of Dialogue Acts and Dialogue Context to improve the representation of argument structures in opinionated dialogues. We show that our model achieves state-of-the-art results on the Kialo benchmark test set, and provide evidence of its robustness in an open-domain scenario.
[ "Mezza, Stefano", "Wobcke, Wayne", "Blair, Alan" ]
Exploiting Dialogue Acts and Context to Identify Argumentative Relations in Online Debates
argmining-1.4
Poster
2103.16387v1
https://aclanthology.org/2024.argmining-1.5.bib
@inproceedings{farzam-etal-2024-multi, title = "Multi-Task Learning Improves Performance in Deep Argument Mining Models", author = "Farzam, Amirhossein and Shekhar, Shashank and Mehlhaff, Isaac and Morucci, Marco", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.5", pages = "46--58", abstract = "The successful analysis of argumentative techniques in user-generated text is central to many downstream tasks such as political and market analysis. Recent argument mining tools use state-of-the-art deep learning methods to extract and annotate argumentative techniques from various online text corpora, but each task is treated as separate and different bespoke models are fine-tuned for each dataset. We show that different argument mining tasks share common semantic and logical structure by implementing a multi-task approach to argument mining that meets or exceeds performance from existing methods for the same problems. Our model builds a shared representation of the input and exploits similarities between tasks in order to further boost performance via parameter-sharing. Our results are important for argument mining as they show that different tasks share substantial similarities and suggest a holistic approach to the extraction of argumentative techniques from text.", }
The successful analysis of argumentative techniques in user-generated text is central to many downstream tasks such as political and market analysis. Recent argument mining tools use state-of-the-art deep learning methods to extract and annotate argumentative techniques from various online text corpora, but each task is treated as separate and different bespoke models are fine-tuned for each dataset. We show that different argument mining tasks share common semantic and logical structure by implementing a multi-task approach to argument mining that meets or exceeds performance from existing methods for the same problems. Our model builds a shared representation of the input and exploits similarities between tasks in order to further boost performance via parameter-sharing. Our results are important for argument mining as they show that different tasks share substantial similarities and suggest a holistic approach to the extraction of argumentative techniques from text.
[ "Farzam, Amirhossein", "Shekhar, Shashank", "Mehlhaff, Isaac", "Morucci, Marco" ]
Multi-Task Learning Improves Performance in Deep Argument Mining Models
argmining-1.5
Poster
2307.01401v1
https://aclanthology.org/2024.argmining-1.6.bib
@inproceedings{ye-teufel-2024-computational, title = "Computational Modelling of Undercuts in Real-world Arguments", author = "Ye, Yuxiao and Teufel, Simone", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.6", pages = "59--68", abstract = "Argument Mining (AM) is the task of automatically analysing arguments, such that the unstructured information contained in them is converted into structured representations. Undercut is a unique structure in arguments, as it challenges the relationship between a premise and a claim, unlike direct attacks which challenge the claim or the premise itself. Undercut is also an important counterargument device as it often reflects the value of arguers. However, undercuts have not received the attention in the field of AM they should have {---} there is neither much corpus data about undercuts, nor an existing AM model that can automatically recognise them. In this paper, we present a real-world dataset of arguments with explicitly annotated undercuts, and the first computational model that is able to recognise them. The dataset consists of 400 arguments, containing 326 undercuts. On this dataset, our approach beats a strong baseline in undercut recognition, with $F_1 = 38.8\%$, which is comparable to the performance on recognising direct attacks. We also conduct experiments on a benchmark dataset containing no undercuts, and prove that our approach is as good as the state of the art in terms of recognising the overall structure of arguments. Our work pioneers the systematic analysis and computational modelling of undercuts in real-world arguments, setting a foundation for future research in the role of undercuts in the dynamics of argumentation.", }
Argument Mining (AM) is the task of automatically analysing arguments, such that the unstructured information contained in them is converted into structured representations. Undercut is a unique structure in arguments, as it challenges the relationship between a premise and a claim, unlike direct attacks which challenge the claim or the premise itself. Undercut is also an important counterargument device as it often reflects the value of arguers. However, undercuts have not received the attention in the field of AM they should have {---} there is neither much corpus data about undercuts, nor an existing AM model that can automatically recognise them. In this paper, we present a real-world dataset of arguments with explicitly annotated undercuts, and the first computational model that is able to recognise them. The dataset consists of 400 arguments, containing 326 undercuts. On this dataset, our approach beats a strong baseline in undercut recognition, with $F_1 = 38.8\%$, which is comparable to the performance on recognising direct attacks. We also conduct experiments on a benchmark dataset containing no undercuts, and prove that our approach is as good as the state of the art in terms of recognising the overall structure of arguments. Our work pioneers the systematic analysis and computational modelling of undercuts in real-world arguments, setting a foundation for future research in the role of undercuts in the dynamics of argumentation.
[ "Ye, Yuxiao", "Teufel, Simone" ]
Computational Modelling of Undercuts in Real-world Arguments
argmining-1.6
Poster
2007.11480v4
https://aclanthology.org/2024.argmining-1.7.bib
@inproceedings{mancini-etal-2024-mamkit, title = "{MAMK}it: A Comprehensive Multimodal Argument Mining Toolkit", author = "Mancini, Eleonora and Ruggeri, Federico and Colamonaco, Stefano and Zecca, Andrea and Marro, Samuele and Torroni, Paolo", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.7", pages = "69--82", abstract = "Multimodal Argument Mining (MAM) is a recent area of research aiming to extend argument analysis and improve discourse understanding by incorporating multiple modalities. Initial results confirm the importance of paralinguistic cues in this field. However, the research community still lacks a comprehensive platform where results can be easily reproduced, and methods and models can be stored, compared, and tested against a variety of benchmarks. To address these challenges, we propose MAMKit, an open, publicly available, PyTorch toolkit that consolidates datasets and models, providing a standardized platform for experimentation. MAMKit also includes some new baselines, designed to stimulate research on text and audio encoding and fusion for MAM tasks. Our initial results with MAMKit indicate that advancements in MAM require novel annotation processes to encompass auditory cues effectively.", }
Multimodal Argument Mining (MAM) is a recent area of research aiming to extend argument analysis and improve discourse understanding by incorporating multiple modalities. Initial results confirm the importance of paralinguistic cues in this field. However, the research community still lacks a comprehensive platform where results can be easily reproduced, and methods and models can be stored, compared, and tested against a variety of benchmarks. To address these challenges, we propose MAMKit, an open, publicly available, PyTorch toolkit that consolidates datasets and models, providing a standardized platform for experimentation. MAMKit also includes some new baselines, designed to stimulate research on text and audio encoding and fusion for MAM tasks. Our initial results with MAMKit indicate that advancements in MAM require novel annotation processes to encompass auditory cues effectively.
[ "Mancini, Eleonora", "Ruggeri, Federico", "Colamonaco, Stefano", "Zecca, Andrea", "Marro, Samuele", "Torroni, Paolo" ]
{MAMK}it: A Comprehensive Multimodal Argument Mining Toolkit
argmining-1.7
Poster
1903.09525v3
https://aclanthology.org/2024.argmining-1.8.bib
@inproceedings{ruiz-dolz-etal-2024-overview, title = "Overview of {D}ial{AM}-2024: Argument Mining in Natural Language Dialogues", author = "Ruiz-Dolz, Ramon and Lawrence, John and Schad, Ella and Reed, Chris", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.8", pages = "83--92", abstract = "Argumentation is the process by which humans rationally elaborate their thoughts and opinions in written (e.g., essays) or spoken (e.g., debates) contexts. Argument Mining research, however, has been focused on either written argumentation or spoken argumentation but without considering any additional information, e.g., speech acts and intentions. In this paper, we present an overview of DialAM-2024, the first shared task in dialogical argument mining, where argumentative relations and speech illocutions are modelled together in a unified framework. The task was divided into two different sub-tasks: the identification of propositional relations and the identification of illocutionary relations. Six different teams explored different methodologies to leverage both sources of information to reconstruct argument maps containing the locutions uttered in the speeches and the argumentative propositions implicit in them. The best performing team achieved an F1-score of 67.05{\%} in the overall evaluation of the reconstruction of complete argument maps, considering both sub-tasks included in the DialAM-2024 shared task.", }
Argumentation is the process by which humans rationally elaborate their thoughts and opinions in written (e.g., essays) or spoken (e.g., debates) contexts. Argument Mining research, however, has been focused on either written argumentation or spoken argumentation but without considering any additional information, e.g., speech acts and intentions. In this paper, we present an overview of DialAM-2024, the first shared task in dialogical argument mining, where argumentative relations and speech illocutions are modelled together in a unified framework. The task was divided into two different sub-tasks: the identification of propositional relations and the identification of illocutionary relations. Six different teams explored different methodologies to leverage both sources of information to reconstruct argument maps containing the locutions uttered in the speeches and the argumentative propositions implicit in them. The best performing team achieved an F1-score of 67.05{\%} in the overall evaluation of the reconstruction of complete argument maps, considering both sub-tasks included in the DialAM-2024 shared task.
[ "Ruiz-Dolz, Ramon", "Lawrence, John", "Schad, Ella", "Reed, Chris" ]
Overview of {D}ial{AM}-2024: Argument Mining in Natural Language Dialogues
argmining-1.8
Poster
2212.12652v1
https://aclanthology.org/2024.argmining-1.9.bib
@inproceedings{binder-etal-2024-dfki, title = "{DFKI}-{MLST} at {D}ial{AM}-2024 Shared Task: System Description", author = "Binder, Arne and Anikina, Tatiana and Hennig, Leonhard and Ostermann, Simon", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.9", pages = "93--102", abstract = "This paper presents the dfki-mlst submission for the DialAM shared task (Ruiz-Dolz et al., 2024) on identification of argumentative and illocutionary relations in dialogue. Our model achieves best results in the global setting: 48.25 F1 at the focused level when looking only at the related arguments/locutions and 67.05 F1 at the general level when evaluating the complete argument maps. We describe our implementation of the data pre-processing, relation encoding and classification, evaluating 11 different base models and performing experiments with, e.g., node text combination and data augmentation. Our source code is publicly available.", }
This paper presents the dfki-mlst submission for the DialAM shared task (Ruiz-Dolz et al., 2024) on identification of argumentative and illocutionary relations in dialogue. Our model achieves best results in the global setting: 48.25 F1 at the focused level when looking only at the related arguments/locutions and 67.05 F1 at the general level when evaluating the complete argument maps. We describe our implementation of the data pre-processing, relation encoding and classification, evaluating 11 different base models and performing experiments with, e.g., node text combination and data augmentation. Our source code is publicly available.
[ "Binder, Arne", "Anikina, Tatiana", "Hennig, Leonhard", "Ostermann, Simon" ]
{DFKI}-{MLST} at {D}ial{AM}-2024 Shared Task: System Description
argmining-1.9
Poster
2407.19740v1
https://aclanthology.org/2024.argmining-1.10.bib
@inproceedings{wu-etal-2024-knowcomp, title = "{K}now{C}omp at {D}ial{AM}-2024: Fine-tuning Pre-trained Language Models for Dialogical Argument Mining with Inference Anchoring Theory", author = "Wu, Yuetong and Zhou, Yukai and Xu, Baixuan and Wang, Weiqi and Song, Yangqiu", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.10", pages = "103--109", abstract = "In this paper, we present our framework for DialAM-2024 Task A: Identification of Propositional Relations and Task B: Identification of Illocutionary Relations. The goal of Task A is to detect argumentative relations between propositions in an argumentative dialogue, i.e., Inference, Conflict, and Rephrase, while Task B aims to detect illocutionary relations between locutions and argumentative propositions in a dialogue, e.g., Asserting, Agreeing, Arguing, and Disagreeing. Noticing that the definitions of the relations are strict and professional under the context of the IAT framework, we meticulously curate prompts which not only incorporate the formal definitions of the relations, but also exhibit the subtle differences between them. The PTLMs are then fine-tuned on the human-designed prompts to enhance their discrimination capability in classifying different theoretical relations by learning from the human instructions and the ground-truth samples. After extensive experiments, a fine-tuned DeBERTa-v3-base model exhibits the best performance among all PTLMs with an F1 score of 78.90{\%} on Task B. It is worth noting that our framework ranks {\#}2 on the ILO - General official leaderboard.", }
In this paper, we present our framework for DialAM-2024 Task A: Identification of Propositional Relations and Task B: Identification of Illocutionary Relations. The goal of Task A is to detect argumentative relations between propositions in an argumentative dialogue, i.e., Inference, Conflict, and Rephrase, while Task B aims to detect illocutionary relations between locutions and argumentative propositions in a dialogue, e.g., Asserting, Agreeing, Arguing, and Disagreeing. Noticing that the definitions of the relations are strict and professional under the context of the IAT framework, we meticulously curate prompts which not only incorporate the formal definitions of the relations, but also exhibit the subtle differences between them. The PTLMs are then fine-tuned on the human-designed prompts to enhance their discrimination capability in classifying different theoretical relations by learning from the human instructions and the ground-truth samples. After extensive experiments, a fine-tuned DeBERTa-v3-base model exhibits the best performance among all PTLMs with an F1 score of 78.90{\%} on Task B. It is worth noting that our framework ranks {\#}2 on the ILO - General official leaderboard.
[ "Wu, Yuetong", "Zhou, Yukai", "Xu, Baixuan", "Wang, Weiqi", "Song, Yangqiu" ]
{K}now{C}omp at {D}ial{AM}-2024: Fine-tuning Pre-trained Language Models for Dialogical Argument Mining with Inference Anchoring Theory
argmining-1.10
Poster
2407.19740v1
https://aclanthology.org/2024.argmining-1.11.bib
@inproceedings{zheng-etal-2024-knowcomp, title = "{KNOWCOMP} {POKEMON} Team at {D}ial{AM}-2024: A Two-Stage Pipeline for Detecting Relations in Dialogue Argument Mining", author = "Zheng, Zihao and Wang, Zhaowei and Zong, Qing and Song, Yangqiu", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.11", pages = "110--118", abstract = "Dialogue Argument Mining (DialAM) is an important branch of Argument Mining (AM). DialAM-2024 is a shared task focusing on dialogue argument mining, which requires us to identify argumentative relations and illocutionary relations among proposition nodes and locution nodes. To accomplish this, we propose a two-stage pipeline, which includes the Two-Step S-Node Prediction Model in Stage 1 and the YA-Node Prediction Model in Stage 2. We also augment the training data in both stages and introduce context in the prediction of Stage 2. We successfully completed the task and achieved good results. Our team KNOWCOMP POKEMON ranked 1st in the ARI Focused score and 4th in the Global Focused score.", }
Dialogue Argument Mining (DialAM) is an important branch of Argument Mining (AM). DialAM-2024 is a shared task focusing on dialogue argument mining, which requires us to identify argumentative relations and illocutionary relations among proposition nodes and locution nodes. To accomplish this, we propose a two-stage pipeline, which includes the Two-Step S-Node Prediction Model in Stage 1 and the YA-Node Prediction Model in Stage 2. We also augment the training data in both stages and introduce context in the prediction of Stage 2. We successfully completed the task and achieved good results. Our team KNOWCOMP POKEMON ranked 1st in the ARI Focused score and 4th in the Global Focused score.
[ "Zheng, Zihao", "Wang, Zhaowei", "Zong, Qing", "Song, Yangqiu" ]
{KNOWCOMP} {POKEMON} Team at {D}ial{AM}-2024: A Two-Stage Pipeline for Detecting Relations in Dialogue Argument Mining
argmining-1.11
Poster
2407.19740v1
https://aclanthology.org/2024.argmining-1.12.bib
@inproceedings{chaixanien-etal-2024-pungene, title = "Pungene at {D}ial{AM}-2024: Identification of Propositional and Illocutionary Relations", author = "Chaixanien, Sirawut and Choi, Eugene and Shaar, Shaden and Cardie, Claire", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.12", pages = "119--123", abstract = "In this paper we tackle the shared task DialAM-2024 aiming to annotate dialogue based on the inference anchoring theory (IAT). The task can be split into two parts, identification of propositional relations and identification of illocutionary relations. We propose a pipelined system made up of three parts: (1) locutionary-propositions relation detection, (2) propositional relations detection, and (3) illocutionary relations identification. We fine-tune models independently for each step, and combine at the end for the final system. Our proposed system ranks second overall compared to other participants in the shared task, scoring an average f1-score on both sub-parts of 63.7.", }
In this paper we tackle the shared task DialAM-2024 aiming to annotate dialogue based on the inference anchoring theory (IAT). The task can be split into two parts, identification of propositional relations and identification of illocutionary relations. We propose a pipelined system made up of three parts: (1) locutionary-propositions relation detection, (2) propositional relations detection, and (3) illocutionary relations identification. We fine-tune models independently for each step, and combine at the end for the final system. Our proposed system ranks second overall compared to other participants in the shared task, scoring an average f1-score on both sub-parts of 63.7.
[ "Chaixanien, Sirawut", "Choi, Eugene", "Shaar, Shaden", "Cardie, Claire" ]
Pungene at {D}ial{AM}-2024: Identification of Propositional and Illocutionary Relations
argmining-1.12
Poster
2407.19740v1
https://aclanthology.org/2024.argmining-1.13.bib
@inproceedings{saha-srihari-2024-turiya, title = "Turiya at {D}ial{AM}-2024: Inference Anchoring Theory Based {LLM} Parsers", author = "Saha, Sougata and Srihari, Rohini", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.13", pages = "124--129", abstract = "Representing discourse as argument graphs facilitates robust analysis. Although computational frameworks for constructing graphs from monologues exist, there is a lack of frameworks for parsing dialogue. Inference Anchoring Theory (IAT) is a theoretical framework for extracting graphical argument structures and relationships from dialogues. Here, we introduce computational models for implementing the IAT framework for parsing dialogues. We experiment with a classification-based biaffine parser and Large Language Model (LLM)-based generative methods and compare them. Our results demonstrate the utility of finetuning LLMs for constructing IAT-based argument graphs from dialogues, which is a nuanced task.", }
Representing discourse as argument graphs facilitates robust analysis. Although computational frameworks for constructing graphs from monologues exist, there is a lack of frameworks for parsing dialogue. Inference Anchoring Theory (IAT) is a theoretical framework for extracting graphical argument structures and relationships from dialogues. Here, we introduce computational models for implementing the IAT framework for parsing dialogues. We experiment with a classification-based biaffine parser and Large Language Model (LLM)-based generative methods and compare them. Our results demonstrate the utility of finetuning LLMs for constructing IAT-based argument graphs from dialogues, which is a nuanced task.
[ "Saha, Sougata", "Srihari, Rohini" ]
Turiya at {D}ial{AM}-2024: Inference Anchoring Theory Based {LLM} Parsers
argmining-1.13
Poster
2402.07616v3
https://aclanthology.org/2024.argmining-1.14.bib
@inproceedings{falk-etal-2024-overview, title = "Overview of {P}erpective{A}rg2024 The First Shared Task on Perspective Argument Retrieval", author = "Falk, Neele and Waldis, Andreas and Gurevych, Iryna", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.14", pages = "130--149", abstract = "Argument retrieval is the task of finding relevant arguments for a given query. While existing approaches rely solely on the semantic alignment of queries and arguments, this first shared task on perspective argument retrieval incorporates perspectives during retrieval, accounting for latent influences in argumentation. We present a novel multilingual dataset covering demographic and socio-cultural (socio) variables, such as age, gender, and political attitude, representing minority and majority groups in society. We distinguish between three scenarios to explore how retrieval systems consider explicitly (in both query and corpus) and implicitly (only in query) formulated perspectives. This paper provides an overview of this shared task and summarizes the results of the six submitted systems. We find substantial challenges in incorporating perspectivism, especially when aiming for personalization based solely on the text of arguments without explicitly providing socio profiles. Moreover, retrieval systems tend to be biased towards the majority group but partially mitigate bias for the female gender. While we bootstrap perspective argument retrieval, further research is essential to optimize retrieval systems to facilitate personalization and reduce polarization.", }
Argument retrieval is the task of finding relevant arguments for a given query. While existing approaches rely solely on the semantic alignment of queries and arguments, this first shared task on perspective argument retrieval incorporates perspectives during retrieval, ac- counting for latent influences in argumenta- tion. We present a novel multilingual dataset covering demographic and socio-cultural (so- cio) variables, such as age, gender, and politi- cal attitude, representing minority and major- ity groups in society. We distinguish between three scenarios to explore how retrieval systems consider explicitly (in both query and corpus) and implicitly (only in query) formulated per- spectives. This paper provides an overview of this shared task and summarizes the results of the six submitted systems. We find substantial challenges in incorporating perspectivism, especially when aiming for personalization based solely on the text of arguments without explicitly providing socio profiles. Moreover, re- trieval systems tend to be biased towards the majority group but partially mitigate bias for the female gender. While we bootstrap per- spective argument retrieval, further research is essential to optimize retrieval systems to facilitate personalization and reduce polarization.
[ "Falk, Neele", "Waldis, Andreas", "Gurevych, Iryna" ]
Overview of {P}erpective{A}rg2024 The First Shared Task on Perspective Argument Retrieval
argmining-1.14
Poster
2407.19670v1
https://aclanthology.org/2024.argmining-1.15.bib
@inproceedings{gunzler-etal-2024-sovereign, title = {S{\"o}vereign at The Perspective Argument Retrieval Shared Task 2024: Using {LLM}s with Argument Mining}, author = {G{\"u}nzler, Robert and Sevgili, {\"O}zge and Remus, Steffen and Biemann, Chris and Nikishina, Irina}, editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.15", pages = "150--158", abstract = {This paper presents the S{\"o}vereign submission for the shared task on perspective argument retrieval for the Argument Mining Workshop 2024. The main challenge is to perform argument retrieval considering socio-cultural aspects such as political interests, occupation, age, and gender. To address the challenge, we apply open-access Large Language Models (Mistral-7b) in a zero-shot fashion for re-ranking and explicit similarity scoring. Additionally, we combine different features in an ensemble setup using logistic regression. Our system ranks second in the competition for all test set rounds on average for the logistic regression approach using LLM similarity scores as a feature. In addition to the description of the approach, we also provide further results of our ablation study. Our code will be open-sourced upon acceptance.}, }
This paper presents the S{\"o}vereign submission for the shared task on perspective argument retrieval for the Argument Mining Workshop 2024. The main challenge is to perform argument retrieval considering socio-cultural aspects such as political interests, occupation, age, and gender. To address the challenge, we apply open-access Large Language Models (Mistral-7b) in a zero-shot fashion for re-ranking and explicit similarity scoring. Additionally, we combine different features in an ensemble setup using logistic regression. Our system ranks second in the competition for all test set rounds on average for the logistic regression approach using LLM similarity scores as a feature. In addition to the description of the approach, we also provide further results of our ablation study. Our code will be open-sourced upon acceptance.
[ "G{\\\"u}nzler, Robert", "Sevgili, {\\\"O}zge", "Remus, Steffen", "Biemann, Chris", "Nikishina, Irina" ]
S{\"o}vereign at The Perspective Argument Retrieval Shared Task 2024: Using {LLM}s with Argument Mining
argmining-1.15
Poster
2407.19670v1
https://aclanthology.org/2024.argmining-1.16.bib
@inproceedings{saha-srihari-2024-turiya-perpectivearg2024, title = "Turiya at {P}erpective{A}rg2024: A Multilingual Argument Retriever and Reranker", author = "Saha, Sougata and Srihari, Rohini", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.16", pages = "159--163", abstract = "While general argument retrieval systems have significantly matured, multilingual argument retrieval in a socio-cultural setting is an overlooked problem. Advancements in such systems are imperative to enhance the inclusivity of society. The Perspective Argument Retrieval (PAR) task addresses these aspects and acknowledges their potential latent influence on argumentation. Here, we present a multilingual retrieval system for PAR that accounts for societal diversity during retrieval. Our approach couples a retriever and a re-ranker and spans multiple languages, thus factoring in diverse socio-cultural settings. The performance of our end-to-end system on three distinct test sets testifies to its robustness.", }
While general argument retrieval systems have significantly matured, multilingual argument retrieval in a socio-cultural setting is an overlooked problem. Advancements in such systems are imperative to enhance the inclusivity of society. The Perspective Argument Retrieval (PAR) task addresses these aspects and acknowledges their potential latent influence on argumentation. Here, we present a multilingual retrieval system for PAR that accounts for societal diversity during retrieval. Our approach couples a retriever and a re-ranker and spans multiple languages, thus factoring in diverse socio-cultural settings. The performance of our end-to-end system on three distinct test sets testifies to its robustness.
[ "Saha, Sougata", "Srihari, Rohini" ]
Turiya at {P}erpective{A}rg2024: A Multilingual Argument Retriever and Reranker
argmining-1.16
Poster
2204.02292v2
https://aclanthology.org/2024.argmining-1.17.bib
@inproceedings{zhang-braun-2024-twente, title = "Twente-{BMS}-{NLP} at {P}erspective{A}rg 2024: Combining Bi-Encoder and Cross-Encoder for Argument Retrieval", author = "Zhang, Leixin and Braun, Daniel", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.17", pages = "164--168", abstract = "The paper describes our system for the Perspective Argument Retrieval Shared Task. The shared task consists of three scenarios in which relevant political arguments have to be retrieved based on queries (Scenario 1). In Scenario 2 explicit socio-cultural properties are provided and in Scenario 3 implicit socio-cultural properties within the arguments have to be used. We combined a Bi-Encoder and a Cross-Encoder to retrieve relevant arguments for each query. For the third scenario, we extracted linguistic features to predict socio-demographic labels as a separate task. However, the socio-demographic match task proved challenging due to the constraints of argument lengths and genres. The described system won both tracks of the shared task.", }
The paper describes our system for the Perspective Argument Retrieval Shared Task. The shared task consists of three scenarios in which relevant political arguments have to be retrieved based on queries (Scenario 1). In Scenario 2 explicit socio-cultural properties are provided and in Scenario 3 implicit socio-cultural properties within the arguments have to be used. We combined a Bi-Encoder and a Cross-Encoder to retrieve relevant arguments for each query. For the third scenario, we extracted linguistic features to predict socio-demographic labels as a separate task. However, the socio-demographic match task proved challenging due to the constraints of argument lengths and genres. The described system won both tracks of the shared task.
[ "Zhang, Leixin", "Braun, Daniel" ]
Twente-{BMS}-{NLP} at {P}erspective{A}rg 2024: Combining Bi-Encoder and Cross-Encoder for Argument Retrieval
argmining-1.17
Poster
2108.10442v2
https://aclanthology.org/2024.argmining-1.18.bib
@inproceedings{maurer-etal-2024-gesis, title = "{GESIS}-{DSM} at {P}erpective{A}rg2024: A Matter of Style? Socio-Cultural Differences in Argumentation", author = "Maurer, Maximilian and Romberg, Julia and Reuver, Myrthe and Weldekiros, Negash and Lapesa, Gabriella", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.18", pages = "169--181", abstract = "This paper describes the contribution of team GESIS-DSM to the Perspective Argument Retrieval Task, a task on retrieving socio-culturally relevant and diverse arguments for different user queries. Our experiments and analyses aim to explore the nature of the socio-cultural specialization in argument retrieval: (how) do the arguments written by different socio-cultural groups differ? We investigate the impact of content and style for the task of identifying arguments relevant to a query and a certain demographic attribute. In its different configurations, our system employs sentence embedding representations, arguments generated with a Large Language Model, as well as stylistic features. Our final method places third overall in the shared task, and, in comparison, does particularly well in the most difficult evaluation scenario, where the socio-cultural background of the argument author is implicit (i.e. has to be inferred from the text). This result indicates that socio-cultural differences in argument production may indeed be a matter of style.", }
This paper describes the contribution of team GESIS-DSM to the Perspective Argument Retrieval Task, a task on retrieving socio-culturally relevant and diverse arguments for different user queries. Our experiments and analyses aim to explore the nature of the socio-cultural specialization in argument retrieval: (how) do the arguments written by different socio-cultural groups differ? We investigate the impact of content and style for the task of identifying arguments relevant to a query and a certain demographic attribute. In its different configurations, our system employs sentence embedding representations, arguments generated with a Large Language Model, as well as stylistic features. Our final method places third overall in the shared task, and, in comparison, does particularly well in the most difficult evaluation scenario, where the socio-cultural background of the argument author is implicit (i.e. has to be inferred from the text). This result indicates that socio-cultural differences in argument production may indeed be a matter of style.
[ "Maurer, Maximilian", "Romberg, Julia", "Reuver, Myrthe", "Weldekiros, Negash", "Lapesa, Gabriella" ]
{GESIS}-{DSM} at {P}erpective{A}rg2024: A Matter of Style? Socio-Cultural Differences in Argumentation
argmining-1.18
Poster
2404.10681v2
https://aclanthology.org/2024.argmining-1.19.bib
@inproceedings{kang-etal-2024-xfact, title = "{XFACT} Team0331 at {P}erspective{A}rg2024: Sampling from Bounded Clusters for Diverse Relevant Argument Retrieval", author = "Kang, Wan Ju and Han, Jiyoung and Jung, Jaemin and Thorne, James", editor = "Ajjour, Yamen and Bar-Haim, Roy and El Baff, Roxanne and Liu, Zhexiong and Skitalinskaya, Gabriella", booktitle = "Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.argmining-1.19", pages = "182--188", abstract = "This paper reports on the argument mining system submitted to the ArgMining workshop 2024 for The Perspective Argument Retrieval Shared Task (Falk et al., 2024). We combine the strengths of a smaller Sentence BERT model and a Large Language Model: the former is fine-tuned for a contrastive embedding objective and a classification objective whereas the latter is invoked to augment the query and populate the latent space with diverse relevant arguments. We conduct an ablation study on these components to find that each contributes substantially to the diversity and relevance criteria for the top-k retrieval of arguments from the given corpus.", }
This paper reports on the argument mining system submitted to the ArgMining workshop 2024 for The Perspective Argument Retrieval Shared Task (Falk et al., 2024). We combine the strengths of a smaller Sentence BERT model and a Large Language Model: the former is fine-tuned for a contrastive embedding objective and a classification objective whereas the latter is invoked to augment the query and populate the latent space with diverse relevant arguments. We conduct an ablation study on these components to find that each contributes substantially to the diversity and relevance criteria for the top-k retrieval of arguments from the given corpus.
[ "Kang, Wan Ju", "Han, Jiyoung", "Jung, Jaemin", "Thorne, James" ]
{XFACT} Team0331 at {P}erspective{A}rg2024: Sampling from Bounded Clusters for Diverse Relevant Argument Retrieval
argmining-1.19
Poster
2108.10442v2
https://aclanthology.org/2024.bionlp-1.1.bib
@inproceedings{shimizu-etal-2024-improving, title = "Improving Self-training with Prototypical Learning for Source-Free Domain Adaptation on Clinical Text", author = "Shimizu, Seiji and Yada, Shuntaro and Raithel, Lisa and Aramaki, Eiji", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.1", pages = "1--13", abstract = "Domain adaptation is crucial in the clinical domain since the performance of a model trained on one domain (source) degrades seriously when applied to another domain (target). However, conventional domain adaptation methods often cannot be applied due to data sharing restrictions on source data. Source-Free Domain Adaptation (SFDA) addresses this issue by only utilizing a source model and unlabeled target data to adapt to the target domain. In SFDA, self-training is the most widely applied method involving retraining models with target data using predictions from the source model as pseudo-labels. Nevertheless, this approach is prone to contain substantial numbers of errors in pseudo-labeling and might limit model performance in the target domain. In this paper, we propose a Source-Free Prototype-based Self-training (SFPS) aiming to improve the performance of self-training. SFPS generates prototypes without accessing source data and utilizes them for prototypical learning, namely prototype-based pseudo-labeling and contrastive learning. Also, we compare entropy-based, centroid-based, and class-weights-based prototype generation methods to identify the most effective formulation of the proposed method. Experimental results across various datasets demonstrate the effectiveness of the proposed method, consistently outperforming vanilla self-training. The comparison of various prototype-generation methods identifies the most reliable generation method that improves the source model persistently. Additionally, our analysis illustrates SFPS can successfully alleviate errors in pseudo-labeling.", }
Domain adaptation is crucial in the clinical domain since the performance of a model trained on one domain (source) degrades seriously when applied to another domain (target). However, conventional domain adaptation methods often cannot be applied due to data sharing restrictions on source data. Source-Free Domain Adaptation (SFDA) addresses this issue by only utilizing a source model and unlabeled target data to adapt to the target domain. In SFDA, self-training is the most widely applied method involving retraining models with target data using predictions from the source model as pseudo-labels. Nevertheless, this approach is prone to contain substantial numbers of errors in pseudo-labeling and might limit model performance in the target domain. In this paper, we propose a Source-Free Prototype-based Self-training (SFPS) aiming to improve the performance of self-training. SFPS generates prototypes without accessing source data and utilizes them for prototypical learning, namely prototype-based pseudo-labeling and contrastive learning. Also, we compare entropy-based, centroid-based, and class-weights-based prototype generation methods to identify the most effective formulation of the proposed method. Experimental results across various datasets demonstrate the effectiveness of the proposed method, consistently outperforming vanilla self-training. The comparison of various prototype-generation methods identifies the most reliable generation method that improves the source model persistently. Additionally, our analysis illustrates SFPS can successfully alleviate errors in pseudo-labeling.
[ "Shimizu, Seiji", "Yada, Shuntaro", "Raithel, Lisa", "Aramaki, Eiji" ]
Improving Self-training with Prototypical Learning for Source-Free Domain Adaptation on Clinical Text
bionlp-1.1
Poster
2307.03042v3
https://aclanthology.org/2024.bionlp-1.2.bib
@inproceedings{zecevic-etal-2024-generation, title = "Generation and Evaluation of Synthetic Endoscopy Free-Text Reports with Differential Privacy", author = "Zecevic, Agathe and Zhang, Xinyue and Zeki, Sebastian and Roberts, Angus", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.2", pages = "14--24", abstract = "The development of NLP models in the healthcare sector faces important challenges due to the limited availability of patient data, mainly driven by privacy concerns. This study proposes the generation of synthetic free-text medical reports, specifically focusing on the gastroenterology domain, to address the scarcity of specialised datasets, while preserving patient privacy. We fine-tune BioGPT on over 90 000 endoscopy reports and integrate Differential Privacy (DP) into the training process. 10 000 DP-private synthetic reports are generated by this model. The generated synthetic data is evaluated through multiple dimensions: similarity to real datasets, language quality, and utility in both supervised and semi-supervised NLP tasks. Results suggest that while DP integration impacts text quality, it offers a promising balance between data utility and privacy, improving the performance of a real-world downstream task. Our study underscores the potential of synthetic data to facilitate model development in the healthcare domain without compromising patient privacy.", }
The development of NLP models in the healthcare sector faces important challenges due to the limited availability of patient data, mainly driven by privacy concerns. This study proposes the generation of synthetic free-text medical reports, specifically focusing on the gastroenterology domain, to address the scarcity of specialised datasets, while preserving patient privacy. We fine-tune BioGPT on over 90 000 endoscopy reports and integrate Differential Privacy (DP) into the training process. 10 000 DP-private synthetic reports are generated by this model. The generated synthetic data is evaluated through multiple dimensions: similarity to real datasets, language quality, and utility in both supervised and semi-supervised NLP tasks. Results suggest that while DP integration impacts text quality, it offers a promising balance between data utility and privacy, improving the performance of a real-world downstream task. Our study underscores the potential of synthetic data to facilitate model development in the healthcare domain without compromising patient privacy.
[ "Zecevic, Agathe", "Zhang, Xinyue", "Zeki, Sebastian", "Roberts, Angus" ]
Generation and Evaluation of Synthetic Endoscopy Free-Text Reports with Differential Privacy
bionlp-1.2
Poster
2211.10459v1
https://aclanthology.org/2024.bionlp-1.3.bib
@inproceedings{macphail-etal-2024-evaluating, title = "Evaluating the Robustness of Adverse Drug Event Classification Models using Templates", author = {MacPhail, Dorothea and Harbecke, David and Raithel, Lisa and M{\"o}ller, Sebastian}, editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.3", pages = "25--38", abstract = "An adverse drug effect (ADE) is any harmful event resulting from medical drug treatment. Despite their importance, ADEs are often under-reported in official channels. Some research has therefore turned to detecting discussions of ADEs in social media. Impressive results have been achieved in various attempts to detect ADEs. In a high-stakes domain such as medicine, however, an in-depth evaluation of a model{'}s abilities is crucial. We address the issue of thorough performance evaluation in detecting ADEs with hand-crafted templates for four capabilities, temporal order, negation, sentiment and beneficial effect. We find that models with similar performance on held-out test sets have varying results on these capabilities.", }
An adverse drug effect (ADE) is any harmful event resulting from medical drug treatment. Despite their importance, ADEs are often under-reported in official channels. Some research has therefore turned to detecting discussions of ADEs in social media. Impressive results have been achieved in various attempts to detect ADEs. In a high-stakes domain such as medicine, however, an in-depth evaluation of a model{'}s abilities is crucial. We address the issue of thorough performance evaluation in detecting ADEs with hand-crafted templates for four capabilities, temporal order, negation, sentiment and beneficial effect. We find that models with similar performance on held-out test sets have varying results on these capabilities.
[ "MacPhail, Dorothea", "Harbecke, David", "Raithel, Lisa", "M{\\\"o}ller, Sebastian" ]
Evaluating the Robustness of Adverse Drug Event Classification Models using Templates
bionlp-1.3
Poster
2407.02432v1
https://aclanthology.org/2024.bionlp-1.4.bib
@inproceedings{pandey-etal-2024-advancing, title = "Advancing Healthcare Automation: Multi-Agent System for Medical Necessity Justification", author = "Pandey, Himanshu Gautam and Amod, Akhil and Kumar, Shivang", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.4", pages = "39--49", abstract = "Prior Authorization delivers safe, appropriate, and cost-effective care that is medically justified with evidence-based guidelines. However, the process often requires labor-intensive manual comparisons between patient medical records and clinical guidelines, a process that is both repetitive and time-consuming. Recent developments in Large Language Models (LLMs) have shown potential in addressing complex medical NLP tasks with minimal supervision. This paper explores the application of a Multi-Agent System (MAS) that utilizes specialized LLM agents to automate the Prior Authorization task by breaking it down into simpler, more manageable sub-tasks. Our study systematically investigates the effects of various prompting strategies on these agents and benchmarks the performance of different LLMs. We demonstrate that GPT-4 achieves an accuracy of 86.2{\%} in predicting checklist item-level judgments with evidence, and 95.6{\%} in determining overall checklist judgment. Additionally, we explore how these agents can contribute to explainability of steps taken in the process, thereby enhancing trust and transparency in the system.", }
Prior Authorization delivers safe, appropriate, and cost-effective care that is medically justified with evidence-based guidelines. However, the process often requires labor-intensive manual comparisons between patient medical records and clinical guidelines, a process that is both repetitive and time-consuming. Recent developments in Large Language Models (LLMs) have shown potential in addressing complex medical NLP tasks with minimal supervision. This paper explores the application of a Multi-Agent System (MAS) that utilizes specialized LLM agents to automate the Prior Authorization task by breaking it down into simpler, more manageable sub-tasks. Our study systematically investigates the effects of various prompting strategies on these agents and benchmarks the performance of different LLMs. We demonstrate that GPT-4 achieves an accuracy of 86.2{\%} in predicting checklist item-level judgments with evidence, and 95.6{\%} in determining overall checklist judgment. Additionally, we explore how these agents can contribute to explainability of steps taken in the process, thereby enhancing trust and transparency in the system.
[ "Pandey, Himanshu Gautam", "Amod, Akhil", "Kumar, Shivang" ]
Advancing Healthcare Automation: Multi-Agent System for Medical Necessity Justification
bionlp-1.4
Poster
2404.17977v2
https://aclanthology.org/2024.bionlp-1.5.bib
@inproceedings{ceballos-arroyo-etal-2024-open, title = "Open (Clinical) {LLM}s are Sensitive to Instruction Phrasings", author = "Ceballos-Arroyo, Alberto Mario and Munnangi, Monica and Sun, Jiuding and Zhang, Karen and McInerney, Jered and Wallace, Byron C. and Amir, Silvio", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.5", pages = "50--71", abstract = "Instruction-tuned Large Language Models (LLMs) can perform a wide range of tasks given natural language instructions to do so, but they are sensitive to how such instructions are phrased. This issue is especially concerning in healthcare, as clinicians are unlikely to be experienced prompt engineers and the potential consequences of inaccurate outputs are heightened in this domain. This raises a practical question: How robust are instruction-tuned LLMs to natural variations in the instructions provided for clinical NLP tasks? We collect prompts from medical doctors across a range of tasks and quantify the sensitivity of seven LLMs{---}some general, others specialized{---}to natural (i.e., non-adversarial) instruction phrasings. We find that performance varies substantially across all models, and that{---}perhaps surprisingly{---}domain-specific models explicitly trained on clinical data are especially brittle, compared to their general domain counterparts. Further, arbitrary phrasing differences can affect fairness, e.g., valid but distinct instructions for mortality prediction yield a range both in overall performance, and in terms of differences between demographic groups.", }
Instruction-tuned Large Language Models (LLMs) can perform a wide range of tasks given natural language instructions to do so, but they are sensitive to how such instructions are phrased. This issue is especially concerning in healthcare, as clinicians are unlikely to be experienced prompt engineers and the potential consequences of inaccurate outputs are heightened in this domain. This raises a practical question: How robust are instruction-tuned LLMs to natural variations in the instructions provided for clinical NLP tasks? We collect prompts from medical doctors across a range of tasks and quantify the sensitivity of seven LLMs{---}some general, others specialized{---}to natural (i.e., non-adversarial) instruction phrasings. We find that performance varies substantially across all models, and that{---}perhaps surprisingly{---}domain-specific models explicitly trained on clinical data are especially brittle, compared to their general domain counterparts. Further, arbitrary phrasing differences can affect fairness, e.g., valid but distinct instructions for mortality prediction yield a range both in overall performance, and in terms of differences between demographic groups.
[ "Ceballos-Arroyo, Alberto Mario", "Munnangi, Monica", "Sun, Jiuding", "Zhang, Karen", "McInerney, Jered", "Wallace, Byron C.", "Amir, Silvio" ]
Open (Clinical) {LLM}s are Sensitive to Instruction Phrasings
bionlp-1.5
Poster
2407.09429v1
https://aclanthology.org/2024.bionlp-1.6.bib
@inproceedings{kougia-etal-2024-analysing, title = "Analysing zero-shot temporal relation extraction on clinical notes using temporal consistency", author = "Kougia, Vasiliki and Sedova, Anastasiia and Stephan, Andreas Joseph and Zaporojets, Klim and Roth, Benjamin", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.6", pages = "72--84", abstract = "This paper presents the first study for temporal relation extraction in a zero-shot setting focusing on biomedical text. We employ two types of prompts and five Large Language Models (LLMs; GPT-3.5, Mixtral, Llama 2, Gemma, and PMC-LLaMA) to obtain responses about the temporal relations between two events. Our experiments demonstrate that LLMs struggle in the zero-shot setting, performing worse than fine-tuned specialized models in terms of F1 score. This highlights the challenging nature of this task and underscores the need for further research to enhance the performance of LLMs in this context. We further contribute a novel comprehensive temporal analysis by calculating consistency scores for each LLM. Our findings reveal that LLMs face challenges in providing responses consistent with the temporal properties of uniqueness and transitivity. Moreover, we study the relation between the temporal consistency of an LLM and its accuracy, and whether the latter can be improved by solving temporal inconsistencies. Our analysis shows that even when temporal consistency is achieved, the predictions can remain inaccurate.", }
This paper presents the first study for temporal relation extraction in a zero-shot setting focusing on biomedical text. We employ two types of prompts and five Large Language Models (LLMs; GPT-3.5, Mixtral, Llama 2, Gemma, and PMC-LLaMA) to obtain responses about the temporal relations between two events. Our experiments demonstrate that LLMs struggle in the zero-shot setting, performing worse than fine-tuned specialized models in terms of F1 score. This highlights the challenging nature of this task and underscores the need for further research to enhance the performance of LLMs in this context. We further contribute a novel comprehensive temporal analysis by calculating consistency scores for each LLM. Our findings reveal that LLMs face challenges in providing responses consistent with the temporal properties of uniqueness and transitivity. Moreover, we study the relation between the temporal consistency of an LLM and its accuracy, and whether the latter can be improved by solving temporal inconsistencies. Our analysis shows that even when temporal consistency is achieved, the predictions can remain inaccurate.
[ "Kougia, Vasiliki", "Sedova, Anastasiia", "Stephan, Andreas Joseph", "Zaporojets, Klim", "Roth, Benjamin" ]
Analysing zero-shot temporal relation extraction on clinical notes using temporal consistency
bionlp-1.6
Poster
2406.11486v1
https://aclanthology.org/2024.bionlp-1.7.bib
@inproceedings{xu-etal-2024-overview, title = "Overview of the First Shared Task on Clinical Text Generation: {RRG}24 and {``}Discharge Me!{''}", author = "Xu, Justin and Chen, Zhihong and Johnston, Andrew and Blankemeier, Louis and Varma, Maya and Hom, Jason and Collins, William J. and Modi, Ankit and Lloyd, Robert and Hopkins, Benjamin and Langlotz, Curtis and Delbrouck, Jean-Benoit", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.7", pages = "85--98", abstract = "Recent developments in natural language generation have tremendous implications for healthcare. For instance, state-of-the-art systems could automate the generation of sections in clinical reports to alleviate physician workload and streamline hospital documentation. To explore these applications, we present a shared task consisting of two subtasks: (1) Radiology Report Generation (RRG24) and (2) Discharge Summary Generation ({``}Discharge Me!{''}). RRG24 involves generating the {`}Findings{'} and {`}Impression{'} sections of radiology reports given chest X-rays. {``}Discharge Me!{''} involves generating the {`}Brief Hospital Course{'} and '{`}Discharge Instructions{'} sections of discharge summaries for patients admitted through the emergency department. {``}Discharge Me!{''} submissions were subsequently reviewed by a team of clinicians. Both tasks emphasize the goal of reducing clinician burnout and repetitive workloads by generating documentation. We received 201 submissions from across 8 teams for RRG24, and 211 submissions from across 16 teams for {``}Discharge Me!{''}.", }
Recent developments in natural language generation have tremendous implications for healthcare. For instance, state-of-the-art systems could automate the generation of sections in clinical reports to alleviate physician workload and streamline hospital documentation. To explore these applications, we present a shared task consisting of two subtasks: (1) Radiology Report Generation (RRG24) and (2) Discharge Summary Generation ({``}Discharge Me!{''}). RRG24 involves generating the {`}Findings{'} and {`}Impression{'} sections of radiology reports given chest X-rays. {``}Discharge Me!{''} involves generating the {`}Brief Hospital Course{'} and '{`}Discharge Instructions{'} sections of discharge summaries for patients admitted through the emergency department. {``}Discharge Me!{''} submissions were subsequently reviewed by a team of clinicians. Both tasks emphasize the goal of reducing clinician burnout and repetitive workloads by generating documentation. We received 201 submissions from across 8 teams for RRG24, and 211 submissions from across 16 teams for {``}Discharge Me!{''}.
[ "Xu, Justin", "Chen, Zhihong", "Johnston, Andrew", "Blankemeier, Louis", "Varma, Maya", "Hom, Jason", "Collins, William J.", "Modi, Ankit", "Lloyd, Robert", "Hopkins, Benjamin", "Langlotz, Curtis", "Delbrouck, Jean-Benoit" ]
Overview of the First Shared Task on Clinical Text Generation: {RRG}24 and {``}Discharge Me!{''}
bionlp-1.7
Poster
2407.15359v1
https://aclanthology.org/2024.bionlp-1.8.bib
@inproceedings{nicolson-etal-2024-e, title = "e-Health {CSIRO} at {RRG}24: Entropy-Augmented Self-Critical Sequence Training for Radiology Report Generation", author = "Nicolson, Aaron and Liu, Jinghui and Dowling, Jason and Nguyen, Anthony and Koopman, Bevan", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.8", pages = "99--104", abstract = "The core novelty of our approach lies in the addition of entropy regularisation to self-critical sequence training. This helps maintain a higher entropy in the token distribution, preventing overfitting to common phrases and ensuring a broader exploration of the vocabulary during training, which is essential for handling the diversity of the radiology reports in the RRG24 datasets. We apply this to a multimodal language model with RadGraph as the reward. Additionally, our model incorporates several other aspects. We use token type embeddings to differentiate between findings and impression section tokens, as well as image embeddings. To handle missing sections, we employ special tokens. We also utilise an attention mask with non-causal masking for the image embeddings and a causal mask for the report token embeddings.", }
The core novelty of our approach lies in the addition of entropy regularisation to self-critical sequence training. This helps maintain a higher entropy in the token distribution, preventing overfitting to common phrases and ensuring a broader exploration of the vocabulary during training, which is essential for handling the diversity of the radiology reports in the RRG24 datasets. We apply this to a multimodal language model with RadGraph as the reward. Additionally, our model incorporates several other aspects. We use token type embeddings to differentiate between findings and impression section tokens, as well as image embeddings. To handle missing sections, we employ special tokens. We also utilise an attention mask with non-causal masking for the image embeddings and a causal mask for the report token embeddings.
[ "Nicolson, Aaron", "Liu, Jinghui", "Dowling, Jason", "Nguyen, Anthony", "Koopman, Bevan" ]
e-Health {CSIRO} at {RRG}24: Entropy-Augmented Self-Critical Sequence Training for Radiology Report Generation
bionlp-1.8
Poster
2408.03500v1
https://aclanthology.org/2024.bionlp-1.9.bib
@inproceedings{damm-etal-2024-wispermed, title = "{W}is{P}er{M}ed at {``}Discharge Me!{''}: Advancing Text Generation in Healthcare with Large Language Models, Dynamic Expert Selection, and Priming Techniques on {MIMIC}-{IV}", author = {Damm, Hendrik and Pakull, Tabea Margareta Grace and Ery{\i}lmaz, Bahad{\i}r and Becker, Helmut and Idrissi-Yaghir, Ahmad and Sch{\"a}fer, Henning and Schultenk{\"a}mper, Sergej and Friedrich, Christoph M.}, editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.9", pages = "105--121", abstract = "This study aims to leverage state of the art language models to automate generating the {``}Brief Hospital Course{''} and {``}Discharge Instructions{''} sections of Discharge Summaries from the MIMIC-IV dataset, reducing clinicians{'} administrative workload. We investigate how automation can improve documentation accuracy, alleviate clinician burnout, and enhance operational efficacy in healthcare facilities. This research was conducted within our participation in the Shared Task Discharge Me! at BioNLP @ ACL 2024. Various strategies were employed, including Few-Shot learning, instruction tuning, and Dynamic Expert Selection (DES), to develop models capable of generating the required text sections. Utilizing an additional clinical domain-specific dataset demonstrated substantial potential to enhance clinical language processing. The DES method, which optimizes the selection of text outputs from multiple predictions, proved to be especially effective. It achieved the highest overall score of 0.332 in the competition, surpassing single-model outputs. This finding suggests that advanced deep learning methods in combination with DES can effectively automate parts of electronic health record documentation. These advancements could enhance patient care by freeing clinician time for patient interactions. The integration of text selection strategies represents a promising avenue for further research.", }
This study aims to leverage state of the art language models to automate generating the {``}Brief Hospital Course{''} and {``}Discharge Instructions{''} sections of Discharge Summaries from the MIMIC-IV dataset, reducing clinicians{'} administrative workload. We investigate how automation can improve documentation accuracy, alleviate clinician burnout, and enhance operational efficacy in healthcare facilities. This research was conducted within our participation in the Shared Task Discharge Me! at BioNLP @ ACL 2024. Various strategies were employed, including Few-Shot learning, instruction tuning, and Dynamic Expert Selection (DES), to develop models capable of generating the required text sections. Utilizing an additional clinical domain-specific dataset demonstrated substantial potential to enhance clinical language processing. The DES method, which optimizes the selection of text outputs from multiple predictions, proved to be especially effective. It achieved the highest overall score of 0.332 in the competition, surpassing single-model outputs. This finding suggests that advanced deep learning methods in combination with DES can effectively automate parts of electronic health record documentation. These advancements could enhance patient care by freeing clinician time for patient interactions. The integration of text selection strategies represents a promising avenue for further research.
[ "Damm, Hendrik", "Pakull, Tabea Margareta Grace", "Ery{\\i}lmaz, Bahad{\\i}r", "Becker, Helmut", "Idrissi-Yaghir, Ahmad", "Sch{\\\"a}fer, Henning", "Schultenk{\\\"a}mper, Sergej", "Friedrich, Christoph M." ]
{W}is{P}er{M}ed at {``}Discharge Me!{''}: Advancing Text Generation in Healthcare with Large Language Models, Dynamic Expert Selection, and Priming Techniques on {MIMIC}-{IV}
bionlp-1.9
Poster
2405.11255v1
https://aclanthology.org/2024.bionlp-1.10.bib
@inproceedings{goldsack-etal-2024-overview, title = "Overview of the {B}io{L}ay{S}umm 2024 Shared Task on the Lay Summarization of Biomedical Research Articles", author = "Goldsack, Tomas and Scarton, Carolina and Shardlow, Matthew and Lin, Chenghua", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.10", pages = "122--131", abstract = "This paper presents the setup and results of the second edition of the BioLaySumm shared task on the Lay Summarisation of Biomedical Research Articles, hosted at the BioNLP Workshop at ACL 2024. In this task edition, we aim to build on the first edition{'}s success by further increasing research interest in this important task and encouraging participants to explore novel approaches that will help advance the state-of-the-art. Encouragingly, we found research interest in the task to be high, with this edition of the task attracting a total of 53 participating teams, a significant increase in engagement from the previous edition. Overall, our results show that a broad range of innovative approaches were adopted by task participants, with a predictable shift towards the use of Large Language Models (LLMs).", }
This paper presents the setup and results of the second edition of the BioLaySumm shared task on the Lay Summarisation of Biomedical Research Articles, hosted at the BioNLP Workshop at ACL 2024. In this task edition, we aim to build on the first edition{'}s success by further increasing research interest in this important task and encouraging participants to explore novel approaches that will help advance the state-of-the-art. Encouragingly, we found research interest in the task to be high, with this edition of the task attracting a total of 53 participating teams, a significant increase in engagement from the previous edition. Overall, our results show that a broad range of innovative approaches were adopted by task participants, with a predictable shift towards the use of Large Language Models (LLMs).
[ "Goldsack, Tomas", "Scarton, Carolina", "Shardlow, Matthew", "Lin, Chenghua" ]
Overview of the {B}io{L}ay{S}umm 2024 Shared Task on the Lay Summarization of Biomedical Research Articles
bionlp-1.10
Poster
2309.17332v2
https://aclanthology.org/2024.bionlp-1.11.bib
@inproceedings{you-etal-2024-uiuc, title = "{UIUC}{\_}{B}io{NLP} at {B}io{L}ay{S}umm: An Extract-then-Summarize Approach Augmented with {W}ikipedia Knowledge for Biomedical Lay Summarization", author = "You, Zhiwen and Radhakrishna, Shruthan and Ming, Shufan and Kilicoglu, Halil", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.11", pages = "132--143", abstract = "As the number of scientific publications is growing at a rapid pace, it is difficult for laypeople to keep track of and understand the latest scientific advances, especially in the biomedical domain. While the summarization of scientific publications has been widely studied, research on summarization targeting laypeople has remained scarce. In this study, considering the lengthy input of biomedical articles, we have developed a lay summarization system through an extract-then-summarize framework with large language models (LLMs) to summarize biomedical articles for laypeople. Using a fine-tuned GPT-3.5 model, our approach achieves the highest overall ranking and demonstrates the best relevance performance in the BioLaySumm 2024 shared task.", }
As the number of scientific publications is growing at a rapid pace, it is difficult for laypeople to keep track of and understand the latest scientific advances, especially in the biomedical domain. While the summarization of scientific publications has been widely studied, research on summarization targeting laypeople has remained scarce. In this study, considering the lengthy input of biomedical articles, we have developed a lay summarization system through an extract-then-summarize framework with large language models (LLMs) to summarize biomedical articles for laypeople. Using a fine-tuned GPT-3.5 model, our approach achieves the highest overall ranking and demonstrates the best relevance performance in the BioLaySumm 2024 shared task.
[ "You, Zhiwen", "Radhakrishna, Shruthan", "Ming, Shufan", "Kilicoglu, Halil" ]
{UIUC}{\_}{B}io{NLP} at {B}io{L}ay{S}umm: An Extract-then-Summarize Approach Augmented with {W}ikipedia Knowledge for Biomedical Lay Summarization
bionlp-1.11
Poster
2310.15702v1
https://aclanthology.org/2024.bionlp-1.12.bib
@inproceedings{gonzalez-hernandez-etal-2024-end, title = "End-to-End Relation Extraction of Pharmacokinetic Estimates from the Scientific Literature", author = "Gonzalez Hernandez, Ferran and Smith, Victoria and Nguyen, Quang and Chotsiri, Palang and Wattanakul, Thanaporn and Antonio Cordero, Jos{\'e} and Ballester, Maria Rosa and Sole, Albert and Mundin, Gill and Lilaonitkul, Watjana and Standing, Joseph F. and Kloprogge, Frank", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.12", pages = "144--154", abstract = "The lack of comprehensive and standardised databases containing Pharmacokinetic (PK) parameters presents a challenge in the drug development pipeline. Efficiently managing the increasing volume of published PK Parameters requires automated approaches that centralise information from diverse studies. In this work, we present the Pharmacokinetic Relation Extraction Dataset (PRED), a novel, manually curated corpus developed by pharmacometricians and NLP specialists, covering multiple types of PK parameters and numerical expressions reported in open-access scientific articles. PRED covers annotations for various entities and relations involved in PK parameter measurements from 3,600 sentences. We also introduce an end-to-end relation extraction model based on BioBERT, which is trained with joint named entity recognition (NER) and relation extraction objectives. The optimal pipeline achieved a micro-average F1-score of 94{\%} for NER and over 85{\%} F1-score across all relation types. This work represents the first resource for training and evaluating models for PK end-to-end extraction across multiple parameters and study types. We make our corpus and model openly available to accelerate the construction of large PK databases and to support similar endeavours in other scientific disciplines.", }
The lack of comprehensive and standardised databases containing Pharmacokinetic (PK) parameters presents a challenge in the drug development pipeline. Efficiently managing the increasing volume of published PK Parameters requires automated approaches that centralise information from diverse studies. In this work, we present the Pharmacokinetic Relation Extraction Dataset (PRED), a novel, manually curated corpus developed by pharmacometricians and NLP specialists, covering multiple types of PK parameters and numerical expressions reported in open-access scientific articles. PRED covers annotations for various entities and relations involved in PK parameter measurements from 3,600 sentences. We also introduce an end-to-end relation extraction model based on BioBERT, which is trained with joint named entity recognition (NER) and relation extraction objectives. The optimal pipeline achieved a micro-average F1-score of 94{\%} for NER and over 85{\%} F1-score across all relation types. This work represents the first resource for training and evaluating models for PK end-to-end extraction across multiple parameters and study types. We make our corpus and model openly available to accelerate the construction of large PK databases and to support similar endeavours in other scientific disciplines.
[ "Gonzalez Hernandez, Ferran", "Smith, Victoria", "Nguyen, Quang", "Chotsiri, Palang", "Wattanakul, Thanaporn", "Antonio Cordero, Jos{\\'e}", "Ballester, Maria Rosa", "Sole, Albert", "Mundin, Gill", "Lilaonitkul, Watjana", "Standing, Joseph F.", "Kloprogge, Frank" ]
End-to-End Relation Extraction of Pharmacokinetic Estimates from the Scientific Literature
bionlp-1.12
Poster
1412.0744v2
https://aclanthology.org/2024.bionlp-1.13.bib
@inproceedings{yang-etal-2024-kg, title = "{KG}-Rank: Enhancing Large Language Models for Medical {QA} with Knowledge Graphs and Ranking Techniques", author = "Yang, Rui and Liu, Haoran and Marrese-Taylor, Edison and Zeng, Qingcheng and Ke, Yuhe and Li, Wanxin and Cheng, Lechao and Chen, Qingyu and Caverlee, James and Matsuo, Yutaka and Li, Irene", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.13", pages = "155--166", abstract = "Large Language Models (LLMs) have significantly advanced healthcare innovation on generation capabilities. However, their application in real clinical settings is challenging due to potential deviations from medical facts and inherent biases. In this work, we develop an augmented LLM framework, KG-Rank, which leverages a medical knowledge graph (KG) with ranking and re-ranking techniques, aiming to improve free-text question-answering (QA) in the medical domain. Specifically, upon receiving a question, we initially retrieve triplets from a medical KG to gather factual information. Subsequently, we innovatively apply ranking methods to refine the ordering of these triplets, aiming to yield more precise answers. To the best of our knowledge, KG-Rank is the first application of ranking models combined with KG in medical QA specifically for generating long answers. Evaluation of four selected medical QA datasets shows that KG-Rank achieves an improvement of over 18{\%} in the ROUGE-L score. Moreover, we extend KG-Rank to open domains, where it realizes a 14{\%} improvement in ROUGE-L, showing the effectiveness and potential of KG-Rank.", }
Large Language Models (LLMs) have significantly advanced healthcare innovation on generation capabilities. However, their application in real clinical settings is challenging due to potential deviations from medical facts and inherent biases. In this work, we develop an augmented LLM framework, KG-Rank, which leverages a medical knowledge graph (KG) with ranking and re-ranking techniques, aiming to improve free-text question-answering (QA) in the medical domain. Specifically, upon receiving a question, we initially retrieve triplets from a medical KG to gather factual information. Subsequently, we innovatively apply ranking methods to refine the ordering of these triplets, aiming to yield more precise answers. To the best of our knowledge, KG-Rank is the first application of ranking models combined with KG in medical QA specifically for generating long answers. Evaluation of four selected medical QA datasets shows that KG-Rank achieves an improvement of over 18{\%} in the ROUGE-L score. Moreover, we extend KG-Rank to open domains, where it realizes a 14{\%} improvement in ROUGE-L, showing the effectiveness and potential of KG-Rank.
[ "Yang, Rui", "Liu, Haoran", "Marrese-Taylor, Edison", "Zeng, Qingcheng", "Ke, Yuhe", "Li, Wanxin", "Cheng, Lechao", "Chen, Qingyu", "Caverlee, James", "Matsuo, Yutaka", "Li, Irene" ]
{KG}-Rank: Enhancing Large Language Models for Medical {QA} with Knowledge Graphs and Ranking Techniques
bionlp-1.13
Poster
2403.05881v3
https://aclanthology.org/2024.bionlp-1.14.bib
@inproceedings{kim-etal-2024-medexqa, title = "{M}ed{E}x{QA}: Medical Question Answering Benchmark with Multiple Explanations", author = "Kim, Yunsoo and Wu, Jinge and Abdulle, Yusuf and Wu, Honghan", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.14", pages = "167--181", abstract = "This paper introduces MedExQA, a novel benchmark in medical question-answering, to evaluate large language models{'} (LLMs) understanding of medical knowledge through explanations. By constructing datasets across five distinct medical specialties that are underrepresented in current datasets and further incorporating multiple explanations for each question-answer pair, we address a major gap in current medical QA benchmarks which is the absence of comprehensive assessments of LLMs{'} ability to generate nuanced medical explanations. Our work highlights the importance of explainability in medical LLMs, proposes an effective methodology for evaluating models beyond classification accuracy, and sheds light on one specific domain, speech language pathology, where current LLMs including GPT4 lack good understanding. Our results show generation evaluation with multiple explanations aligns better with human assessment, highlighting an opportunity for a more robust automated comprehension assessment for LLMs. To diversify open-source medical LLMs (currently mostly based on Llama2), this work also proposes a new medical model, MedPhi-2, based on Phi-2 (2.7B). The model outperformed medical LLMs based on Llama2-70B in generating explanations, showing its effectiveness in the resource-constrained medical domain. We will share our benchmark datasets and the trained model.", }
This paper introduces MedExQA, a novel benchmark in medical question-answering, to evaluate large language models{'} (LLMs) understanding of medical knowledge through explanations. By constructing datasets across five distinct medical specialties that are underrepresented in current datasets and further incorporating multiple explanations for each question-answer pair, we address a major gap in current medical QA benchmarks which is the absence of comprehensive assessments of LLMs{'} ability to generate nuanced medical explanations. Our work highlights the importance of explainability in medical LLMs, proposes an effective methodology for evaluating models beyond classification accuracy, and sheds light on one specific domain, speech language pathology, where current LLMs including GPT4 lack good understanding. Our results show generation evaluation with multiple explanations aligns better with human assessment, highlighting an opportunity for a more robust automated comprehension assessment for LLMs. To diversify open-source medical LLMs (currently mostly based on Llama2), this work also proposes a new medical model, MedPhi-2, based on Phi-2 (2.7B). The model outperformed medical LLMs based on Llama2-70B in generating explanations, showing its effectiveness in the resource-constrained medical domain. We will share our benchmark datasets and the trained model.
[ "Kim, Yunsoo", "Wu, Jinge", "Abdulle, Yusuf", "Wu, Honghan" ]
{M}ed{E}x{QA}: Medical Question Answering Benchmark with Multiple Explanations
bionlp-1.14
Poster
2406.06331v2
https://aclanthology.org/2024.bionlp-1.15.bib
@inproceedings{yao-etal-2024-clinicians, title = "Do Clinicians Know How to Prompt? The Need for Automatic Prompt Optimization Help in Clinical Note Generation", author = "Yao, Zonghai and Jaafar, Ahmed and Wang, Beining and Yang, Zhichao and Yu, Hong", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.15", pages = "182--201", abstract = "This study examines the effect of prompt engineering on the performance of Large Language Models (LLMs) in clinical note generation. We introduce an Automatic Prompt Optimization (APO) framework to refine initial prompts and compare the outputs of medical experts, non-medical experts, and APO-enhanced GPT3.5 and GPT4. Results highlight GPT4-APO{'}s superior performance in standardizing prompt quality across clinical note sections. A human-in-the-loop approach shows that experts maintain content quality post-APO, with a preference for their own modifications, suggesting the value of expert customization. We recommend a two-phase optimization process, leveraging APO-GPT4 for consistency and expert input for personalization.", }
This study examines the effect of prompt engineering on the performance of Large Language Models (LLMs) in clinical note generation. We introduce an Automatic Prompt Optimization (APO) framework to refine initial prompts and compare the outputs of medical experts, non-medical experts, and APO-enhanced GPT3.5 and GPT4. Results highlight GPT4-APO{'}s superior performance in standardizing prompt quality across clinical note sections. A human-in-the-loop approach shows that experts maintain content quality post-APO, with a preference for their own modifications, suggesting the value of expert customization. We recommend a two-phase optimization process, leveraging APO-GPT4 for consistency and expert input for personalization.
[ "Yao, Zonghai", "Jaafar, Ahmed", "Wang, Beining", "Yang, Zhichao", "Yu, Hong" ]
Do Clinicians Know How to Prompt? The Need for Automatic Prompt Optimization Help in Clinical Note Generation
bionlp-1.15
Poster
2312.16066v1
https://aclanthology.org/2024.bionlp-1.16.bib
@inproceedings{sinha-etal-2024-domain, title = "Domain-specific or Uncertainty-aware models: Does it really make a difference for biomedical text classification?", author = "Sinha, Aman and Mickus, Timothee and Clausel, Marianne and Constant, Mathieu and Coubez, Xavier", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.16", pages = "202--211", abstract = "The success of pretrained language models (PLMs) across a spate of use-cases has led to significant investment from the NLP community towards building domain-specific foundational models. On the other hand, in mission critical settings such as biomedical applications, other aspects also factor in{---}chief of which is a model{'}s ability to produce reasonable estimates of its own uncertainty. In the present study, we discuss these two desiderata through the lens of how they shape the entropy of a model{'}s output probability distribution. We find that domain specificity and uncertainty awareness can often be successfully combined, but the exact task at hand weighs in much more strongly.", }
The success of pretrained language models (PLMs) across a spate of use-cases has led to significant investment from the NLP community towards building domain-specific foundational models. On the other hand, in mission critical settings such as biomedical applications, other aspects also factor in{---}chief of which is a model{'}s ability to produce reasonable estimates of its own uncertainty. In the present study, we discuss these two desiderata through the lens of how they shape the entropy of a model{'}s output probability distribution. We find that domain specificity and uncertainty awareness can often be successfully combined, but the exact task at hand weighs in much more strongly.
[ "Sinha, Aman", "Mickus, Timothee", "Clausel, Marianne", "Constant, Mathieu", "Coubez, Xavier" ]
Domain-specific or Uncertainty-aware models: Does it really make a difference for biomedical text classification?
bionlp-1.16
Poster
0308033v1
https://aclanthology.org/2024.bionlp-1.17.bib
@inproceedings{fytas-etal-2024-rule, title = "Can Rule-Based Insights Enhance {LLM}s for Radiology Report Classification? Introducing the {R}ad{P}rompt Methodology.", author = "Fytas, Panagiotis and Breger, Anna and Selby, Ian and Baker, Simon and Shahipasand, Shahab and Korhonen, Anna", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.17", pages = "212--235", abstract = "Developing imaging models capable of detecting pathologies from chest X-rays can be cost and time-prohibitive for large datasets as it requires supervision to attain state-of-the-art performance. Instead, labels extracted from radiology reports may serve as distant supervision since these are routinely generated as part of clinical practice. Despite their widespread use, current rule-based methods for label extraction rely on extensive rule sets that are limited in their robustness to syntactic variability. To alleviate these limitations, we introduce RadPert, a rule-based system that integrates an uncertainty-aware information schema with a streamlined set of rules, enhancing performance. Additionally, we have developed RadPrompt, a multi-turn prompting strategy that leverages RadPert to bolster the zero-shot predictive capabilities of large language models, achieving a statistically significant improvement in weighted average F1 score over GPT-4 Turbo. Most notably, RadPrompt surpasses both its underlying models, showcasing the synergistic potential of LLMs with rule-based models. We have evaluated our methods on two English Corpora: the MIMIC-CXR gold-standard test set and a gold-standard dataset collected from the Cambridge University Hospitals.", }
Developing imaging models capable of detecting pathologies from chest X-rays can be cost and time-prohibitive for large datasets as it requires supervision to attain state-of-the-art performance. Instead, labels extracted from radiology reports may serve as distant supervision since these are routinely generated as part of clinical practice. Despite their widespread use, current rule-based methods for label extraction rely on extensive rule sets that are limited in their robustness to syntactic variability. To alleviate these limitations, we introduce RadPert, a rule-based system that integrates an uncertainty-aware information schema with a streamlined set of rules, enhancing performance. Additionally, we have developed RadPrompt, a multi-turn prompting strategy that leverages RadPert to bolster the zero-shot predictive capabilities of large language models, achieving a statistically significant improvement in weighted average F1 score over GPT-4 Turbo. Most notably, RadPrompt surpasses both its underlying models, showcasing the synergistic potential of LLMs with rule-based models. We have evaluated our methods on two English Corpora: the MIMIC-CXR gold-standard test set and a gold-standard dataset collected from the Cambridge University Hospitals.
[ "Fytas, Panagiotis", "Breger, Anna", "Selby, Ian", "Baker, Simon", "Shahipasand, Shahab", "Korhonen, Anna" ]
Can Rule-Based Insights Enhance {LLM}s for Radiology Report Classification? Introducing the {R}ad{P}rompt Methodology.
bionlp-1.17
Poster
2408.04121v1
https://aclanthology.org/2024.bionlp-1.18.bib
@inproceedings{hijazi-etal-2024-using, title = "Using Large Language Models to Evaluate Biomedical Query-Focused Summarisation", author = "Hijazi, Hashem and Molla, Diego and Nguyen, Vincent and Karimi, Sarvnaz", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.18", pages = "236--242", abstract = "Biomedical question-answering systems remain popular for biomedical experts interacting with the literature to answer their medical questions. However, these systems are difficult to evaluate in the absence of costly human experts. Therefore, automatic evaluation metrics are often used in this space. Traditional automatic metrics such as ROUGE or BLEU, which rely on token overlap, have shown a low correlation with humans. We present a study that uses large language models (LLMs) to automatically evaluate systems from an international challenge on biomedical semantic indexing and question answering, called BioASQ. We measure the agreement of LLM-produced scores against human judgements. We show that LLMs correlate similarly to lexical methods when using basic prompting techniques. However, by aggregating evaluators with LLMs or by fine-tuning, we find that our methods outperform the baselines by a large margin, achieving a Spearman correlation of 0.501 and 0.511, respectively.", }
Biomedical question-answering systems remain popular for biomedical experts interacting with the literature to answer their medical questions. However, these systems are difficult to evaluate in the absence of costly human experts. Therefore, automatic evaluation metrics are often used in this space. Traditional automatic metrics such as ROUGE or BLEU, which rely on token overlap, have shown a low correlation with humans. We present a study that uses large language models (LLMs) to automatically evaluate systems from an international challenge on biomedical semantic indexing and question answering, called BioASQ. We measure the agreement of LLM-produced scores against human judgements. We show that LLMs correlate similarly to lexical methods when using basic prompting techniques. However, by aggregating evaluators with LLMs or by fine-tuning, we find that our methods outperform the baselines by a large margin, achieving a Spearman correlation of 0.501 and 0.511, respectively.
[ "Hijazi, Hashem", "Molla, Diego", "Nguyen, Vincent", "Karimi, Sarvnaz" ]
Using Large Language Models to Evaluate Biomedical Query-Focused Summarisation
bionlp-1.18
Poster
2310.15702v1
https://aclanthology.org/2024.bionlp-1.19.bib
@inproceedings{caralt-etal-2024-continuous, title = "Continuous Predictive Modeling of Clinical Notes and {ICD} Codes in Patient Health Records", author = "Caralt, Mireia Hernandez and Ng, Clarence Boon Liang and Rei, Marek", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.19", pages = "243--255", abstract = "Electronic Health Records (EHR) serve as a valuable source of patient information, offering insights into medical histories, treatments, and outcomes. Previous research has developed systems for detecting applicable ICD codes that should be assigned while writing a given EHR document, mainly focusing on discharge summaries written at the end of a hospital stay. In this work, we investigate the potential of predicting these codes for the whole patient stay at different time points during their stay, even before they are officially assigned by clinicians. The development of methods to predict diagnoses and treatments earlier in advance could open opportunities for predictive medicine, such as identifying disease risks sooner, suggesting treatments, and optimizing resource allocation. Our experiments show that predictions regarding final ICD codes can be made already two days after admission and we propose a custom model that improves performance on this early prediction task.", }
Electronic Health Records (EHR) serve as a valuable source of patient information, offering insights into medical histories, treatments, and outcomes. Previous research has developed systems for detecting applicable ICD codes that should be assigned while writing a given EHR document, mainly focusing on discharge summaries written at the end of a hospital stay. In this work, we investigate the potential of predicting these codes for the whole patient stay at different time points during their stay, even before they are officially assigned by clinicians. The development of methods to predict diagnoses and treatments earlier in advance could open opportunities for predictive medicine, such as identifying disease risks sooner, suggesting treatments, and optimizing resource allocation. Our experiments show that predictions regarding final ICD codes can be made already two days after admission and we propose a custom model that improves performance on this early prediction task.
[ "Caralt, Mireia Hernandez", "Ng, Clarence Boon Liang", "Rei, Marek" ]
Continuous Predictive Modeling of Clinical Notes and {ICD} Codes in Patient Health Records
bionlp-1.19
Poster
2405.11622v2
https://aclanthology.org/2024.bionlp-1.20.bib
@inproceedings{vatsal-singh-2024-gpt, title = "Can {GPT} Redefine Medical Understanding? Evaluating {GPT} on Biomedical Machine Reading Comprehension", author = "Vatsal, Shubham and Singh, Ayush", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.20", pages = "256--265", abstract = "Large language models (LLMs) have shown remarkable performance on many tasks in different domains. However, their performance in contextual biomedical machine reading comprehension (MRC) has not been evaluated in depth. In this work, we evaluate GPT on four contextual biomedical MRC benchmarks. We experiment with different conventional prompting techniques as well as introduce our own novel prompting method. To solve some of the retrieval problems inherent to LLMs, we propose a prompting strategy named Implicit Retrieval Augmented Generation (RAG) that alleviates the need for using vector databases to retrieve important chunks in traditional RAG setups. Moreover, we report qualitative assessments on the natural language generation outputs from our approach. The results show that our new prompting technique is able to get the best performance in two out of four datasets and ranks second in the rest of them. Experiments show that modern-day LLMs like GPT even in a zero-shot setting can outperform supervised models, leading to new state-of-the-art (SoTA) results on two of the benchmarks.", }
Large language models (LLMs) have shown remarkable performance on many tasks in different domains. However, their performance in contextual biomedical machine reading comprehension (MRC) has not been evaluated in depth. In this work, we evaluate GPT on four contextual biomedical MRC benchmarks. We experiment with different conventional prompting techniques as well as introduce our own novel prompting method. To solve some of the retrieval problems inherent to LLMs, we propose a prompting strategy named Implicit Retrieval Augmented Generation (RAG) that alleviates the need for using vector databases to retrieve important chunks in traditional RAG setups. Moreover, we report qualitative assessments on the natural language generation outputs from our approach. The results show that our new prompting technique is able to get the best performance in two out of four datasets and ranks second in the rest of them. Experiments show that modern-day LLMs like GPT even in a zero-shot setting can outperform supervised models, leading to new state-of-the-art (SoTA) results on two of the benchmarks.
[ "Vatsal, Shubham", "Singh, Ayush" ]
Can {GPT} Redefine Medical Understanding? Evaluating {GPT} on Biomedical Machine Reading Comprehension
bionlp-1.20
Poster
2405.18682v1
https://aclanthology.org/2024.bionlp-1.21.bib
@inproceedings{farzi-etal-2024-get, title = "Get the Best out of 1{B} {LLM}s: Insights from Information Extraction on Clinical Documents", author = "Farzi, Saeed and Ghosh, Soumitra and Lavelli, Alberto and Magnini, Bernardo", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.21", pages = "266--276", abstract = "While the popularity of large, versatile language models like ChatGPT continues to rise, the landscape shifts when considering open-source models tailored to specific domains. Moreover, many areas, such as clinical documents, suffer from a scarcity of training data, often amounting to only a few hundred instances. Additionally, in certain settings, such as hospitals, cloud-based solutions pose privacy concerns, necessitating the deployment of language models on traditional hardware, such as single GPUs or powerful CPUs. To address these complexities, we conduct extensive experiments on both clinical entity detection and relation extraction in clinical documents using 1B parameter models. Our study delves into traditional fine-tuning, continuous pre-training in the medical domain, and instruction-tuning methods, providing valuable insights into their effectiveness in a multilingual setting. Our results underscore the importance of domain-specific models and pre-training for clinical natural language processing tasks. Furthermore, data augmentation using cross-lingual information improves performance in most cases, highlighting the potential for multilingual enhancements.", }
While the popularity of large, versatile language models like ChatGPT continues to rise, the landscape shifts when considering open-source models tailored to specific domains. Moreover, many areas, such as clinical documents, suffer from a scarcity of training data, often amounting to only a few hundred instances. Additionally, in certain settings, such as hospitals, cloud-based solutions pose privacy concerns, necessitating the deployment of language models on traditional hardware, such as single GPUs or powerful CPUs. To address these complexities, we conduct extensive experiments on both clinical entity detection and relation extraction in clinical documents using 1B parameter models. Our study delves into traditional fine-tuning, continuous pre-training in the medical domain, and instruction-tuning methods, providing valuable insights into their effectiveness in a multilingual setting. Our results underscore the importance of domain-specific models and pre-training for clinical natural language processing tasks. Furthermore, data augmentation using cross-lingual information improves performance in most cases, highlighting the potential for multilingual enhancements.
[ "Farzi, Saeed", "Ghosh, Soumitra", "Lavelli, Alberto", "Magnini, Bernardo" ]
Get the Best out of 1{B} {LLM}s: Insights from Information Extraction on Clinical Documents
bionlp-1.21
Poster
2206.14719v2
https://aclanthology.org/2024.bionlp-1.22.bib
@inproceedings{manes-etal-2024-k, title = "K-{QA}: A Real-World Medical {Q}{\&}{A} Benchmark", author = "Manes, Itay and Ronn, Naama and Cohen, David and Ilan Ber, Ran and Horowitz-Kugler, Zehavi and Stanovsky, Gabriel", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.22", pages = "277--294", abstract = "Ensuring the accuracy of responses provided by large language models (LLMs) is crucial, particularly in clinical settings where incorrect information may directly impact patient health. To address this challenge, we construct K-QA, a dataset containing 1,212 patient questions originating from real-world conversations held on a popular clinical online platform. We employ a panel of in-house physicians to answer and manually decompose a subset of K-QA into self-contained statements. Additionally, we formulate two NLI-based evaluation metrics approximating recall and precision: (1) comprehensiveness, measuring the percentage of essential clinical information in the generated answer and (2) hallucination rate, measuring the number of statements from the physician-curated response contradicted by the LLM answer. Finally, we use K-QA along with these metrics to evaluate several state-of-the-art models, as well as the effect of in-context learning and medically-oriented augmented retrieval schemes developed by the authors. Our findings indicate that in-context learning improves the comprehensiveness of the models, and augmented retrieval is effective in reducing hallucinations. We will make K-QA available to the community to spur research into medically accurate NLP applications.", }
Ensuring the accuracy of responses provided by large language models (LLMs) is crucial, particularly in clinical settings where incorrect information may directly impact patient health. To address this challenge, we construct K-QA, a dataset containing 1,212 patient questions originating from real-world conversations held on a popular clinical online platform. We employ a panel of in-house physicians to answer and manually decompose a subset of K-QA into self-contained statements. Additionally, we formulate two NLI-based evaluation metrics approximating recall and precision: (1) comprehensiveness, measuring the percentage of essential clinical information in the generated answer and (2) hallucination rate, measuring the number of statements from the physician-curated response contradicted by the LLM answer. Finally, we use K-QA along with these metrics to evaluate several state-of-the-art models, as well as the effect of in-context learning and medically-oriented augmented retrieval schemes developed by the authors. Our findings indicate that in-context learning improves the comprehensiveness of the models, and augmented retrieval is effective in reducing hallucinations. We will make K-QA available to the community to spur research into medically accurate NLP applications.
[ "Manes, Itay", "Ronn, Naama", "Cohen, David", "Ilan Ber, Ran", "Horowitz-Kugler, Zehavi", "Stanovsky, Gabriel" ]
K-{QA}: A Real-World Medical {Q}{\&}{A} Benchmark
bionlp-1.22
Poster
1806.05452v1
https://aclanthology.org/2024.bionlp-1.23.bib
@inproceedings{arsenyan-etal-2024-large, title = "Large Language Models for Biomedical Knowledge Graph Construction: Information extraction from {EMR} notes", author = "Arsenyan, Vahan and Bughdaryan, Spartak and Shaya, Fadi and Small, Kent Wilson and Shahnazaryan, Davit", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.23", pages = "295--317", abstract = "The automatic construction of knowledge graphs (KGs) is an important research area in medicine, with far-reaching applications spanning drug discovery and clinical trial design. These applications hinge on the accurate identification of interactions among medical and biological entities. In this study, we propose an end-to-end machine learning solution based on large language models (LLMs) that utilize electronic medical record notes to construct KGs. The entities used in the KG construction process are diseases, factors, treatments, as well as manifestations that coexist with the patient while experiencing the disease. Given the critical need for high-quality performance in medical applications, we embark on a comprehensive assessment of 12 LLMs of various architectures, evaluating their performance and safety attributes. To gauge the quantitative efficacy of our approach by assessing both precision and recall, we manually annotate a dataset provided by the Macula and Retina Institute. We also assess the qualitative performance of LLMs, such as the ability to generate structured outputs or the tendency to hallucinate. The results illustrate that in contrast to encoder-only and encoder-decoder, decoder-only LLMs require further investigation. Additionally, we provide guided prompt design to utilize such LLMs. The application of the proposed methodology is demonstrated on age-related macular degeneration.", }
The automatic construction of knowledge graphs (KGs) is an important research area in medicine, with far-reaching applications spanning drug discovery and clinical trial design. These applications hinge on the accurate identification of interactions among medical and biological entities. In this study, we propose an end-to-end machine learning solution based on large language models (LLMs) that utilize electronic medical record notes to construct KGs. The entities used in the KG construction process are diseases, factors, treatments, as well as manifestations that coexist with the patient while experiencing the disease. Given the critical need for high-quality performance in medical applications, we embark on a comprehensive assessment of 12 LLMs of various architectures, evaluating their performance and safety attributes. To gauge the quantitative efficacy of our approach by assessing both precision and recall, we manually annotate a dataset provided by the Macula and Retina Institute. We also assess the qualitative performance of LLMs, such as the ability to generate structured outputs or the tendency to hallucinate. The results illustrate that in contrast to encoder-only and encoder-decoder, decoder-only LLMs require further investigation. Additionally, we provide guided prompt design to utilize such LLMs. The application of the proposed methodology is demonstrated on age-related macular degeneration.
[ "Arsenyan, Vahan", "Bughdaryan, Spartak", "Shaya, Fadi", "Small, Kent Wilson", "Shahnazaryan, Davit" ]
Large Language Models for Biomedical Knowledge Graph Construction: Information extraction from {EMR} notes
bionlp-1.23
Poster
2301.12473v2
https://aclanthology.org/2024.bionlp-1.24.bib
@inproceedings{bhattarai-etal-2024-document, title = "Document-level Clinical Entity and Relation extraction via Knowledge Base-Guided Generation", author = "Bhattarai, Kriti and Oh, Inez Y. and Abrams, Zachary B. and Lai, Albert M.", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.24", pages = "318--327", abstract = "Generative pre-trained transformer (GPT) models have shown promise in clinical entity and relation extraction tasks because of their precise extraction and contextual understanding capability. In this work, we further leverage the Unified Medical Language System (UMLS) knowledge base to accurately identify medical concepts and improve clinical entity and relation extraction at the document level. Our framework selects UMLS concepts relevant to the text and combines them with prompts to guide language models in extracting entities. Our experiments demonstrate that this initial concept mapping and the inclusion of these mapped concepts in the prompts improves extraction results compared to few-shot extraction tasks on generic language models that do not leverage UMLS. Further, our results show that this approach is more effective than the standard Retrieval Augmented Generation (RAG) technique, where retrieved data is compared with prompt embeddings to generate results. Overall, we find that integrating UMLS concepts with GPT models significantly improves entity and relation identification, outperforming the baseline and RAG models. By combining the precise concept mapping capability of knowledge-based approaches like UMLS with the contextual understanding capability of GPT, our method highlights the potential of these approaches in specialized domains like healthcare.", }
Generative pre-trained transformer (GPT) models have shown promise in clinical entity and relation extraction tasks because of their precise extraction and contextual understanding capability. In this work, we further leverage the Unified Medical Language System (UMLS) knowledge base to accurately identify medical concepts and improve clinical entity and relation extraction at the document level. Our framework selects UMLS concepts relevant to the text and combines them with prompts to guide language models in extracting entities. Our experiments demonstrate that this initial concept mapping and the inclusion of these mapped concepts in the prompts improves extraction results compared to few-shot extraction tasks on generic language models that do not leverage UMLS. Further, our results show that this approach is more effective than the standard Retrieval Augmented Generation (RAG) technique, where retrieved data is compared with prompt embeddings to generate results. Overall, we find that integrating UMLS concepts with GPT models significantly improves entity and relation identification, outperforming the baseline and RAG models. By combining the precise concept mapping capability of knowledge-based approaches like UMLS with the contextual understanding capability of GPT, our method highlights the potential of these approaches in specialized domains like healthcare.
[ "Bhattarai, Kriti", "Oh, Inez Y.", "Abrams, Zachary B.", "Lai, Albert M." ]
Document-level Clinical Entity and Relation extraction via Knowledge Base-Guided Generation
bionlp-1.24
Poster
2407.10021v1
https://aclanthology.org/2024.bionlp-1.25.bib
@inproceedings{wu-etal-2024-bical, title = "{B}i{CAL}: Bi-directional Contrastive Active Learning for Clinical Report Generation", author = "Wu, Tianyi and Zhang, Jingqing and Bai, Wenjia and Sun, Kai", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.25", pages = "328--341", abstract = "State-of-the-art performance by large pre-trained models in computer vision (CV) and natural language processing (NLP) suggests their potential for domain-specific tasks. However, training these models requires vast amounts of labelled data, a challenge in many domains due to the cost and expertise required for data labelling. Active Learning (AL) can mitigate this by selecting minimal yet informative data for model training. While AL has been mainly applied to single-modal tasks in the fields of NLP and CV, its application in multi-modal tasks remains underexplored. In this work, we proposed a novel AL strategy, Bidirectional Contrastive Active Learning strategy (BiCAL), that used both image and text latent spaces to identify contrastive samples to select batches to query for labels. BiCAL was robust to class imbalance data problems by its design, which is a problem that is commonly seen in training domain-specific models. We assessed BiCAL{'}s performance in domain-specific learning on the clinical report generation tasks from chest X-ray images. Our experiments showed that BiCAL outperforms State-of-the-art methods in clinical efficacy metrics, improving recall by 2.4{\%} and F1 score by 9.5{\%}, showcasing its effectiveness in actively training domain-specific multi-modal models.", }
State-of-the-art performance by large pre-trained models in computer vision (CV) and natural language processing (NLP) suggests their potential for domain-specific tasks. However, training these models requires vast amounts of labelled data, a challenge in many domains due to the cost and expertise required for data labelling. Active Learning (AL) can mitigate this by selecting minimal yet informative data for model training. While AL has been mainly applied to single-modal tasks in the fields of NLP and CV, its application in multi-modal tasks remains underexplored. In this work, we proposed a novel AL strategy, Bidirectional Contrastive Active Learning strategy (BiCAL), that used both image and text latent spaces to identify contrastive samples to select batches to query for labels. BiCAL was robust to class imbalance data problems by its design, which is a problem that is commonly seen in training domain-specific models. We assessed BiCAL{'}s performance in domain-specific learning on the clinical report generation tasks from chest X-ray images. Our experiments showed that BiCAL outperforms State-of-the-art methods in clinical efficacy metrics, improving recall by 2.4{\%} and F1 score by 9.5{\%}, showcasing its effectiveness in actively training domain-specific multi-modal models.
[ "Wu, Tianyi", "Zhang, Jingqing", "Bai, Wenjia", "Sun, Kai" ]
{B}i{CAL}: Bi-directional Contrastive Active Learning for Clinical Report Generation
bionlp-1.25
Poster
2310.07355v3
https://aclanthology.org/2024.bionlp-1.26.bib
@inproceedings{singh-etal-2024-generation, title = "Generation and De-Identification of {I}ndian Clinical Discharge Summaries using {LLM}s", author = "Singh, Sanjeet and Gupta, Shreya and Gupta, Niralee and Sharma, Naimish and Srivastava, Lokesh and Agarwal, Vibhu and Modi, Ashutosh", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.26", pages = "342--362", abstract = "The consequences of a healthcare data breach can be devastating for the patients, providers, and payers. The average financial impact of a data breach in recent months has been estimated to be close to USD 10 million. This is especially significant for healthcare organizations in India that are managing rapid digitization while still establishing data governance procedures that align with the letter and spirit of the law. Computer-based systems for de-identification of personal information are vulnerable to data drift, often rendering them ineffective in cross-institution settings. Therefore, a rigorous assessment of existing de-identification against local health datasets is imperative to support the safe adoption of digital health initiatives in India. Using a small set of de-identified patient discharge summaries provided by an Indian healthcare institution, in this paper, we report the nominal performance of de-identification algorithms (based on language models) trained on publicly available non-Indian datasets, pointing towards a lack of cross-institutional generalization. Similarly, experimentation with off-the-shelf de-identification systems reveals potential risks associated with the approach. To overcome data scarcity, we explore generating synthetic clinical reports (using publicly available and Indian summaries) by performing in-context learning over Large Language Models (LLMs). Our experiments demonstrate the use of generated reports as an effective strategy for creating high-performing de-identification systems with good generalization capabilities.", }
The consequences of a healthcare data breach can be devastating for the patients, providers, and payers. The average financial impact of a data breach in recent months has been estimated to be close to USD 10 million. This is especially significant for healthcare organizations in India that are managing rapid digitization while still establishing data governance procedures that align with the letter and spirit of the law. Computer-based systems for de-identification of personal information are vulnerable to data drift, often rendering them ineffective in cross-institution settings. Therefore, a rigorous assessment of existing de-identification against local health datasets is imperative to support the safe adoption of digital health initiatives in India. Using a small set of de-identified patient discharge summaries provided by an Indian healthcare institution, in this paper, we report the nominal performance of de-identification algorithms (based on language models) trained on publicly available non-Indian datasets, pointing towards a lack of cross-institutional generalization. Similarly, experimentation with off-the-shelf de-identification systems reveals potential risks associated with the approach. To overcome data scarcity, we explore generating synthetic clinical reports (using publicly available and Indian summaries) by performing in-context learning over Large Language Models (LLMs). Our experiments demonstrate the use of generated reports as an effective strategy for creating high-performing de-identification systems with good generalization capabilities.
[ "Singh, Sanjeet", "Gupta, Shreya", "Gupta, Niralee", "Sharma, Naimish", "Srivastava, Lokesh", "Agarwal, Vibhu", "Modi, Ashutosh" ]
Generation and De-Identification of {I}ndian Clinical Discharge Summaries using {LLM}s
bionlp-1.26
Poster
2407.05887v1
https://aclanthology.org/2024.bionlp-1.27.bib
@inproceedings{lai-king-paroubek-2024-pre, title = "Pre-training data selection for biomedical domain adaptation using journal impact metrics", author = "Lai-king, Mathieu and Paroubek, Patrick", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.27", pages = "363--369", abstract = "Domain adaptation is a widely used method in natural language processing (NLP) to improve the performance of a language model within a specific domain. This method is particularly common in the biomedical domain, which sees regular publication of numerous scientific articles. PubMed, a significant corpus of text, is frequently used in the biomedical domain. The primary objective of this study is to explore whether refining a pre-training dataset using specific quality metrics for scientific papers can enhance the performance of the resulting model. To accomplish this, we employ two straightforward journal impact metrics and conduct experiments by continually pre-training BERT on various subsets of the complete PubMed training set; we then evaluate the resulting models on biomedical language understanding tasks from the BLURB benchmark. Our results show that pruning using journal impact metrics is not efficient. However, we also show that pre-training using fewer abstracts (but with the same number of training steps) does not necessarily decrease the resulting model{'}s performance.", }
Domain adaptation is a widely used method in natural language processing (NLP) to improve the performance of a language model within a specific domain. This method is particularly common in the biomedical domain, which sees regular publication of numerous scientific articles. PubMed, a significant corpus of text, is frequently used in the biomedical domain. The primary objective of this study is to explore whether refining a pre-training dataset using specific quality metrics for scientific papers can enhance the performance of the resulting model. To accomplish this, we employ two straightforward journal impact metrics and conduct experiments by continually pre-training BERT on various subsets of the complete PubMed training set; we then evaluate the resulting models on biomedical language understanding tasks from the BLURB benchmark. Our results show that pruning using journal impact metrics is not efficient. However, we also show that pre-training using fewer abstracts (but with the same number of training steps) does not necessarily decrease the resulting model{'}s performance.
[ "Lai-king, Mathieu", "Paroubek, Patrick" ]
Pre-training data selection for biomedical domain adaptation using journal impact metrics
bionlp-1.27
Poster
2308.02505v1
https://aclanthology.org/2024.bionlp-1.28.bib
@inproceedings{park-etal-2024-leveraging, title = "Leveraging {LLM}s and Web-based Visualizations for Profiling Bacterial Host Organisms and Genetic Toolboxes", author = "Park, Gilchan and Mutalik, Vivek and Neely, Christopher and Soto, Carlos and Yoo, Shinjae and Dehal, Paramvir", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.28", pages = "370--379", abstract = "Building genetic tools to engineer microorganisms is at the core of understanding and redesigning natural biological systems for useful purposes. Every project to build such a genetic toolbox for an organism starts with a survey of available tools. Despite a decade-long investment and advancement in the field, it is still challenging to mine information about a genetic tool published in the literature and connect that information to microbial genomics and other microbial databases. This information gap not only limits our ability to identify and adopt available tools to a new chassis but also conceals available opportunities to engineer a new microbial host. Recent advances in natural language processing (NLP), particularly large language models (LLMs), offer solutions by enabling efficient extraction of genetic terms and biological entities from a vast array of publications. This work presents a method to automate this process, using text-mining to refine models with data from bioRxiv and other databases. We evaluated various LLMs to investigate their ability to recognize bacterial host organisms and genetic toolboxes for engineering. We demonstrate our methodology with a web application that integrates a conversational LLM and visualization tool, connecting user inquiries to genetic resources and literature findings, thereby saving researchers time, money and effort in their laboratory work.", }
Building genetic tools to engineer microorganisms is at the core of understanding and redesigning natural biological systems for useful purposes. Every project to build such a genetic toolbox for an organism starts with a survey of available tools. Despite a decade-long investment and advancement in the field, it is still challenging to mine information about a genetic tool published in the literature and connect that information to microbial genomics and other microbial databases. This information gap not only limits our ability to identify and adopt available tools to a new chassis but also conceals available opportunities to engineer a new microbial host. Recent advances in natural language processing (NLP), particularly large language models (LLMs), offer solutions by enabling efficient extraction of genetic terms and biological entities from a vast array of publications. This work presents a method to automate this process, using text-mining to refine models with data from bioRxiv and other databases. We evaluated various LLMs to investigate their ability to recognize bacterial host organisms and genetic toolboxes for engineering. We demonstrate our methodology with a web application that integrates a conversational LLM and visualization tool, connecting user inquiries to genetic resources and literature findings, thereby saving researchers time, money and effort in their laboratory work.
[ "Park, Gilchan", "Mutalik, Vivek", "Neely, Christopher", "Soto, Carlos", "Yoo, Shinjae", "Dehal, Paramvir" ]
Leveraging {LLM}s and Web-based Visualizations for Profiling Bacterial Host Organisms and Genetic Toolboxes
bionlp-1.28
Poster
1211.3367v2
https://aclanthology.org/2024.bionlp-1.29.bib
@inproceedings{shlyk-etal-2024-real, title = "{REAL}: A Retrieval-Augmented Entity Linking Approach for Biomedical Concept Recognition", author = "Shlyk, Darya and Groza, Tudor and Mesiti, Marco and Montanelli, Stefano and Cavalleri, Emanuele", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.29", pages = "380--389", abstract = "Large Language Models (LLMs) offer an appealing alternative to training dedicated models for many Natural Language Processing (NLP) tasks. However, outdated knowledge and hallucination issues can be major obstacles in their application in knowledge-intensive biomedical scenarios. In this study, we consider the task of biomedical concept recognition (CR) from unstructured scientific literature and explore the use of Retrieval Augmented Generation (RAG) to improve accuracy and reliability of the LLM-based biomedical CR. Our approach, named REAL (Retrieval Augmented Entity Linking), combines the generative capabilities of LLMs with curated knowledge bases to automatically annotate natural language texts with concepts from bio-ontologies. By applying REAL to benchmark corpora on phenotype concept recognition, we show its effectiveness in improving LLM-based CR performance. This research highlights the potential of combining LLMs with external knowledge sources to advance biomedical text processing.", }
Large Language Models (LLMs) offer an appealing alternative to training dedicated models for many Natural Language Processing (NLP) tasks. However, outdated knowledge and hallucination issues can be major obstacles in their application in knowledge-intensive biomedical scenarios. In this study, we consider the task of biomedical concept recognition (CR) from unstructured scientific literature and explore the use of Retrieval Augmented Generation (RAG) to improve accuracy and reliability of the LLM-based biomedical CR. Our approach, named REAL (Retrieval Augmented Entity Linking), combines the generative capabilities of LLMs with curated knowledge bases to automatically annotate natural language texts with concepts from bio-ontologies. By applying REAL to benchmark corpora on phenotype concept recognition, we show its effectiveness in improving LLM-based CR performance. This research highlights the potential of combining LLMs with external knowledge sources to advance biomedical text processing.
[ "Shlyk, Darya", "Groza, Tudor", "Mesiti, Marco", "Montanelli, Stefano", "Cavalleri, Emanuele" ]
{REAL}: A Retrieval-Augmented Entity Linking Approach for Biomedical Concept Recognition
bionlp-1.29
Poster
2405.11941v1
https://aclanthology.org/2024.bionlp-1.30.bib
@inproceedings{hur-etal-2024-right, title = "Is That the Right Dose? Investigating Generative Language Model Performance on Veterinary Prescription Text Analysis", author = "Hur, Brian and Wang, Lucy Lu and Hardefeldt, Laura and Yetisgen, Meliha", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.30", pages = "390--397", abstract = "Optimizing antibiotic dosing recommendations is a vital aspect of antimicrobial stewardship (AMS) programs aimed at combating antimicrobial resistance (AMR), a significant public health concern, where inappropriate dosing contributes to the selection of AMR pathogens. A key challenge is the extraction of dosing information, which is embedded in free-text clinical records and necessitates numerical transformations. This paper assesses the utility of Large Language Models (LLMs) in extracting essential prescription attributes such as dose, duration, active ingredient, and indication. We evaluate methods to optimize LLMs on this task against a baseline BERT-based ensemble model. Our findings reveal that LLMs can achieve exceptional accuracy by combining probabilistic predictions with deterministic calculations, enforced through functional prompting, to ensure data types and execute necessary arithmetic. This research demonstrates new prospects for automating aspects of AMS when no training data is available.", }
Optimizing antibiotic dosing recommendations is a vital aspect of antimicrobial stewardship (AMS) programs aimed at combating antimicrobial resistance (AMR), a significant public health concern, where inappropriate dosing contributes to the selection of AMR pathogens. A key challenge is the extraction of dosing information, which is embedded in free-text clinical records and necessitates numerical transformations. This paper assesses the utility of Large Language Models (LLMs) in extracting essential prescription attributes such as dose, duration, active ingredient, and indication. We evaluate methods to optimize LLMs on this task against a baseline BERT-based ensemble model. Our findings reveal that LLMs can achieve exceptional accuracy by combining probabilistic predictions with deterministic calculations, enforced through functional prompting, to ensure data types and execute necessary arithmetic. This research demonstrates new prospects for automating aspects of AMS when no training data is available.
[ "Hur, Brian", "Wang, Lucy Lu", "Hardefeldt, Laura", "Yetisgen, Meliha" ]
Is That the Right Dose? Investigating Generative Language Model Performance on Veterinary Prescription Text Analysis
bionlp-1.30
Poster
0809.3047v1
https://aclanthology.org/2024.bionlp-1.31.bib
@inproceedings{hogan-etal-2024-midred, title = "{M}i{DRED}: An Annotated Corpus for Microbiome Knowledge Base Construction", author = "Hogan, William and Bartko, Andrew and Shang, Jingbo and Hsu, Chun-Nan", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.31", pages = "398--408", abstract = "The interplay between microbiota and diseases has emerged as a significant area of research facilitated by the proliferation of cost-effective and precise sequencing technologies. To keep track of the many findings, domain experts manually review publications to extract reported microbe-disease associations and compile them into knowledge bases. However, manual curation efforts struggle to keep up with the pace of publications. Relation extraction has demonstrated remarkable success in other domains, yet the availability of datasets supporting such methods within the domain of microbiome research remains limited. To bridge this gap, we introduce the Microbe-Disease Relation Extraction Dataset (MiDRED), a human-annotated dataset containing 3,116 annotations of fine-grained relationships between microbes and diseases. We hope this dataset will help address the scarcity of data in this crucial domain and facilitate the development of advanced text-mining solutions to automate the creation and maintenance of microbiome knowledge bases.", }
The interplay between microbiota and diseases has emerged as a significant area of research facilitated by the proliferation of cost-effective and precise sequencing technologies. To keep track of the many findings, domain experts manually review publications to extract reported microbe-disease associations and compile them into knowledge bases. However, manual curation efforts struggle to keep up with the pace of publications. Relation extraction has demonstrated remarkable success in other domains, yet the availability of datasets supporting such methods within the domain of microbiome research remains limited. To bridge this gap, we introduce the Microbe-Disease Relation Extraction Dataset (MiDRED), a human-annotated dataset containing 3,116 annotations of fine-grained relationships between microbes and diseases. We hope this dataset will help address the scarcity of data in this crucial domain and facilitate the development of advanced text-mining solutions to automate the creation and maintenance of microbiome knowledge bases.
[ "Hogan, William", "Bartko, Andrew", "Shang, Jingbo", "Hsu, Chun-Nan" ]
{M}i{DRED}: An Annotated Corpus for Microbiome Knowledge Base Construction
bionlp-1.31
Poster
2008.00351v1
https://aclanthology.org/2024.bionlp-1.32.bib
@inproceedings{mahendra-etal-2024-numbers, title = "Do Numbers Matter? Types and Prevalence of Numbers in Clinical Texts", author = "Mahendra, Rahmad and Spina, Damiano and Cavedon, Lawrence and Verspoor, Karin", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.32", pages = "409--415", abstract = "In this short position paper, we highlight the importance of numbers in clinical text. We first present a taxonomy of number variants. We then perform corpus analysis to analyze characteristics of number use in several clinical corpora. Based on our findings of extensive use of numbers, and limited understanding of the impact of numbers on clinical NLP tasks, we identify the need for a public benchmark that will support investigation of numerical processing tasks for the clinical domain.", }
In this short position paper, we highlight the importance of numbers in clinical text. We first present a taxonomy of number variants. We then perform corpus analysis to analyze characteristics of number use in several clinical corpora. Based on our findings of extensive use of numbers, and limited understanding of the impact of numbers on clinical NLP tasks, we identify the need for a public benchmark that will support investigation of numerical processing tasks for the clinical domain.
[ "Mahendra, Rahmad", "Spina, Damiano", "Cavedon, Lawrence", "Verspoor, Karin" ]
Do Numbers Matter? Types and Prevalence of Numbers in Clinical Texts
bionlp-1.32
Poster
2005.14336v1
https://aclanthology.org/2024.bionlp-1.33.bib
@inproceedings{liang-etal-2024-fine, title = "A Fine-grained citation graph for biomedical academic papers: the finding-citation graph", author = "Liang, Yuan and Poesio, Massimo and Rezvani, Roonak", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.33", pages = "416--426", abstract = "Citations typically mention findings as well as papers. To model this richer notion of citation, we introduce a richer form of citation graph with nodes for both academic papers and their findings: the finding-citation graph (FCG). We also present a new pipeline to construct such a graph, which includes a finding identification module and a citation sentence extraction module. From each paper, it extracts rich basic information, abstract, and structured full text first. The abstract and vital sections, such as the results and discussion, are input into the finding identification module. This module identifies multiple findings from a paper, achieving an 80{\%} accuracy in multiple findings evaluation. The full text is input into the citation sentence extraction module to identify inline citation sentences and citation markers, achieving 97.7{\%} accuracy. Then, the graph is constructed using the outputs from the two modules mentioned above. We used Europe PMC to build such a graph using the pipeline, resulting in a graph with 14.25 million nodes and 76 million edges.", }
Citations typically mention findings as well as papers. To model this richer notion of citation, we introduce a richer form of citation graph with nodes for both academic papers and their findings: the finding-citation graph (FCG). We also present a new pipeline to construct such a graph, which includes a finding identification module and a citation sentence extraction module. From each paper, it extracts rich basic information, abstract, and structured full text first. The abstract and vital sections, such as the results and discussion, are input into the finding identification module. This module identifies multiple findings from a paper, achieving an 80{\%} accuracy in multiple findings evaluation. The full text is input into the citation sentence extraction module to identify inline citation sentences and citation markers, achieving 97.7{\%} accuracy. Then, the graph is constructed using the outputs from the two modules mentioned above. We used Europe PMC to build such a graph using the pipeline, resulting in a graph with 14.25 million nodes and 76 million edges.
[ "Liang, Yuan", "Poesio, Massimo", "Rezvani, Roonak" ]
A Fine-grained citation graph for biomedical academic papers: the finding-citation graph
bionlp-1.33
Poster
2111.00172v1
https://aclanthology.org/2024.bionlp-1.34.bib
@inproceedings{engel-park-2024-evaluating, title = "Evaluating Large Language Models for Predicting Protein Behavior under Radiation Exposure and Disease Conditions", author = "Engel, Ryan and Park, Gilchan", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.34", pages = "427--439", abstract = "The primary concern with exposure to ionizing radiation is the risk of developing diseases. While high doses of radiation can cause immediate damage leading to cancer, the effects of low-dose radiation (LDR) are less clear and more controversial. Investigating this further necessitates focusing on the underlying biological structures affected by radiation. Recent work has shown that Large Language Models (LLMs) can effectively predict protein structures and other biological properties. The aim of this research is to utilize open-source LLMs, such as Mistral, Llama 2, and Llama 3, to predict both radiation-induced alterations in proteins and the dynamics of protein-protein interactions (PPIs) within the presence of specific diseases. We show that fine-tuning these models yields state-of-the-art performance for predicting protein interactions in the context of neurodegenerative diseases, metabolic disorders, and cancer. Our findings contribute to the ongoing efforts to understand the complex relationships between radiation exposure and disease mechanisms, illustrating the nuanced capabilities and limitations of current computational models.", }
The primary concern with exposure to ionizing radiation is the risk of developing diseases. While high doses of radiation can cause immediate damage leading to cancer, the effects of low-dose radiation (LDR) are less clear and more controversial. Investigating this further necessitates focusing on the underlying biological structures affected by radiation. Recent work has shown that Large Language Models (LLMs) can effectively predict protein structures and other biological properties. The aim of this research is to utilize open-source LLMs, such as Mistral, Llama 2, and Llama 3, to predict both radiation-induced alterations in proteins and the dynamics of protein-protein interactions (PPIs) within the presence of specific diseases. We show that fine-tuning these models yields state-of-the-art performance for predicting protein interactions in the context of neurodegenerative diseases, metabolic disorders, and cancer. Our findings contribute to the ongoing efforts to understand the complex relationships between radiation exposure and disease mechanisms, illustrating the nuanced capabilities and limitations of current computational models.
[ "Engel, Ryan", "Park, Gilchan" ]
Evaluating Large Language Models for Predicting Protein Behavior under Radiation Exposure and Disease Conditions
bionlp-1.34
Poster
1705.05090v1
https://aclanthology.org/2024.bionlp-1.35.bib
@inproceedings{thawakar-etal-2024-xraygpt, title = "{X}ray{GPT}: Chest Radiographs Summarization using Large Medical Vision-Language Models", author = "Thawakar, Omkar Chakradhar and Shaker, Abdelrahman M. and Mullappilly, Sahal Shaji and Cholakkal, Hisham and Anwer, Rao Muhammad and Khan, Salman and Laaksonen, Jorma and Khan, Fahad", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.35", pages = "440--448", abstract = "The latest breakthroughs in large language models (LLMs) and vision-language models (VLMs) have showcased promising capabilities toward performing a wide range of tasks. Such models are typically trained on massive datasets comprising billions of image-text pairs with diverse tasks. However, their performance on task-specific domains, such as radiology, is still under-explored. While a few works have recently explored LLM-based conversational medical models, they mainly focus on text-based analysis. In this paper, we introduce XrayGPT, a conversational medical vision-language model (VLM) that can analyze and answer open-ended questions about chest radiographs. Specifically, we align a medical visual encoder with a fine-tuned LLM to possess visual conversation abilities, grounded in an understanding of radiographs and medical knowledge. For improved alignment of chest radiograph data, we generate {\textasciitilde}217k interactive and high-quality summaries from free-text radiology reports. Extensive experiments are conducted to validate the merits of XrayGPT. To conduct an expert evaluation, certified medical doctors evaluated the output of our XrayGPT on a test subset and the results reveal that more than 70{\%} of the responses are scientifically accurate, with an average score of 4/5. We hope our simple and effective method establishes a solid baseline, facilitating future research toward automated analysis and summarization of chest radiographs. Code, models, and instruction sets will be publicly released.", }
The latest breakthroughs in large language models (LLMs) and vision-language models (VLMs) have showcased promising capabilities toward performing a wide range of tasks. Such models are typically trained on massive datasets comprising billions of image-text pairs with diverse tasks. However, their performance on task-specific domains, such as radiology, is still under-explored. While a few works have recently explored LLM-based conversational medical models, they mainly focus on text-based analysis. In this paper, we introduce XrayGPT, a conversational medical vision-language model (VLM) that can analyze and answer open-ended questions about chest radiographs. Specifically, we align a medical visual encoder with a fine-tuned LLM to possess visual conversation abilities, grounded in an understanding of radiographs and medical knowledge. For improved alignment of chest radiograph data, we generate {\textasciitilde}217k interactive and high-quality summaries from free-text radiology reports. Extensive experiments are conducted to validate the merits of XrayGPT. To conduct an expert evaluation, certified medical doctors evaluated the output of our XrayGPT on a test subset and the results reveal that more than 70{\%} of the responses are scientifically accurate, with an average score of 4/5. We hope our simple and effective method establishes a solid baseline, facilitating future research toward automated analysis and summarization of chest radiographs. Code, models, and instruction sets will be publicly released.
[ "Thawakar, Omkar Chakradhar", "Shaker, Abdelrahman M.", "Mullappilly, Sahal Shaji", "Cholakkal, Hisham", "Anwer, Rao Muhammad", "Khan, Salman", "Laaksonen, Jorma", "Khan, Fahad" ]
{X}ray{GPT}: Chest Radiographs Summarization using Large Medical Vision-Language Models
bionlp-1.35
Poster
2306.07971v1
https://aclanthology.org/2024.bionlp-1.36.bib
@inproceedings{sanchez-carmona-etal-2024-multilevel-analysis, title = "Multilevel Analysis of Biomedical Domain Adaptation of Llama 2: What Matters the Most? A Case Study", author = "Sanchez Carmona, Vicente Ivan and Jiang, Shanshan and Suzuki, Takeshi and Dong, Bin", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.36", pages = "449--456", abstract = "Domain adaptation of Large Language Models (LLMs) leads to models better suited for a particular domain by capturing patterns from domain text which leads to improvements in downstream tasks. To the naked eye, these improvements are visible; however, the patterns are not so. How can we know which patterns and how much they contribute to changes in downstream scores? Through a Multilevel Analysis we discover and quantify the effect of text patterns on downstream scores of domain-adapted Llama 2 for the task of sentence similarity (BIOSSES dataset). We show that text patterns from PubMed abstracts such as clear writing and simplicity, as well as the amount of biomedical information, are the key for improving downstream scores. Also, we show how another factor not usually quantified contributes equally to downstream scores: choice of hyperparameters for both domain adaptation and fine-tuning.", }
Domain adaptation of Large Language Models (LLMs) leads to models better suited for a particular domain by capturing patterns from domain text which leads to improvements in downstream tasks. To the naked eye, these improvements are visible; however, the patterns are not so. How can we know which patterns and how much they contribute to changes in downstream scores? Through a Multilevel Analysis we discover and quantify the effect of text patterns on downstream scores of domain-adapted Llama 2 for the task of sentence similarity (BIOSSES dataset). We show that text patterns from PubMed abstracts such as clear writing and simplicity, as well as the amount of biomedical information, are the key for improving downstream scores. Also, we show how another factor not usually quantified contributes equally to downstream scores: choice of hyperparameters for both domain adaptation and fine-tuning.
[ "Sanchez Carmona, Vicente Ivan", "Jiang, Shanshan", "Suzuki, Takeshi", "Dong, Bin" ]
Multilevel Analysis of Biomedical Domain Adaptation of Llama 2: What Matters the Most? A Case Study
bionlp-1.36
Poster
2303.02895v3
https://aclanthology.org/2024.bionlp-1.37.bib
@inproceedings{el-khettari-etal-2024-mention, title = "Mention-Agnostic Information Extraction for Ontological Annotation of Biomedical Articles", author = "El Khettari, Oumaima and Nishida, Noriki and Liu, Shanshan and Munne, Rumana Ferdous and Yamagata, Yuki and Quiniou, Solen and Chaffron, Samuel and Matsumoto, Yuji", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.37", pages = "457--473", abstract = "Biomedical information extraction is crucial for advancing research, enhancing healthcare, and discovering treatments by efficiently analyzing extensive data. Given the extensive amount of biomedical data available, automated information extraction methods are necessary due to manual extraction{'}s labor-intensive, expertise-dependent, and costly nature. In this paper, we propose a novel two-stage system for information extraction where we annotate biomedical articles based on a specific ontology (HOIP). The major challenge is annotating relations between biomedical processes that are often not explicitly mentioned in the article text. Here, we first predict the candidate processes and then determine the relationships between these processes. The experimental results show promising outcomes in mention-agnostic process identification using Large Language Models (LLMs). In relation classification, BERT-based supervised models still outperform LLMs significantly. The end-to-end evaluation results suggest the difficulty of this task and room for improvement in both process identification and relation classification.", }
Biomedical information extraction is crucial for advancing research, enhancing healthcare, and discovering treatments by efficiently analyzing extensive data. Given the extensive amount of biomedical data available, automated information extraction methods are necessary due to manual extraction{'}s labor-intensive, expertise-dependent, and costly nature. In this paper, we propose a novel two-stage system for information extraction where we annotate biomedical articles based on a specific ontology (HOIP). The major challenge is annotating relation between biomedical processes often not explicitly mentioned in text articles. Here, we first predict the candidate processes and then determine the relationships between these processes. The experimental results show promising outcomes in mention-agnostic process identification using Large Language Models (LLMs). In relation classification, BERT-based supervised models still outperform LLMs significantly. The end-to-end evaluation results suggest the difficulty of this task and room for improvement in both process identification and relation classification.
[ "El Khettari, Oumaima", "Nishida, Noriki", "Liu, Shanshan", "Munne, Rumana Ferdous", "Yamagata, Yuki", "Quiniou, Solen", "Chaffron, Samuel", "Matsumoto, Yuji" ]
Mention-Agnostic Information Extraction for Ontological Annotation of Biomedical Articles
bionlp-1.37
Poster
2001.07139v2
https://aclanthology.org/2024.bionlp-1.38.bib
@inproceedings{rubchinsky-etal-2024-automatic, title = "Automatic Extraction of Disease Risk Factors from Medical Publications", author = "Rubchinsky, Maxim and Rabinovich, Ella and Shribman, Adi and Golan, Netanel and Sahar, Tali and Shweiki, Dorit", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.38", pages = "474--485", abstract = "We present a novel approach to automating the identification of risk factors for diseases from medical literature, leveraging pre-trained models in the bio-medical domain, while tuning them for the specific task. Faced with the challenges of the diverse and unstructured nature of medical articles, our study introduces a multi-step system to first identify relevant articles, then classify them based on the presence of risk factor discussions and, finally, extract specific risk factor information for a disease through a question-answering model. Our contributions include the development of a comprehensive pipeline for the automated extraction of risk factors and the compilation of several datasets, which can serve as valuable resources for further research in this area. These datasets encompass a wide range of diseases, as well as their associated risk factors, meticulously identified and validated through a fine-grained evaluation scheme. We conducted both automatic and thorough manual evaluation, demonstrating encouraging results. We also highlight the importance of improving models and expanding dataset comprehensiveness to keep pace with the rapidly evolving field of medical research.", }
We present a novel approach to automating the identification of risk factors for diseases from medical literature, leveraging pre-trained models in the bio-medical domain, while tuning them for the specific task. Faced with the challenges of the diverse and unstructured nature of medical articles, our study introduces a multi-step system to first identify relevant articles, then classify them based on the presence of risk factor discussions and, finally, extract specific risk factor information for a disease through a question-answering model. Our contributions include the development of a comprehensive pipeline for the automated extraction of risk factors and the compilation of several datasets, which can serve as valuable resources for further research in this area. These datasets encompass a wide range of diseases, as well as their associated risk factors, meticulously identified and validated through a fine-grained evaluation scheme. We conducted both automatic and thorough manual evaluation, demonstrating encouraging results. We also highlight the importance of improving models and expanding dataset comprehensiveness to keep pace with the rapidly evolving field of medical research.
[ "Rubchinsky, Maxim", "Rabinovich, Ella", "Shribman, Adi", "Golan, Netanel", "Sahar, Tali", "Shweiki, Dorit" ]
Automatic Extraction of Disease Risk Factors from Medical Publications
bionlp-1.38
Poster
2407.07373v1
https://aclanthology.org/2024.bionlp-1.39.bib
@inproceedings{pu-etal-2024-intervention, title = "Intervention extraction in preclinical animal studies of {A}lzheimer{'}s Disease: Enhancing regex performance with language model-based filtering", author = "Pu, Yiyuan and Hair, Kaitlyn and Beck, Daniel and Conway, Mike and MacLeod, Malcolm and Verspoor, Karin", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.39", pages = "486--492", abstract = "We explore different information extraction tools for annotation of interventions to support automated systematic reviews of preclinical AD animal studies. We compare two PICO (Population, Intervention, Comparison, and Outcome) extraction tools and two prompting-based learning strategies based on Large Language Models (LLMs). Motivated by the high recall of a dictionary-based approach, we define a two-stage method, removing false positives obtained from regexes with a pre-trained LM. With ChatGPT-based filtering using three-shot prompting, our approach reduces almost two-thirds of False Positives compared to the dictionary approach alone, while outperforming knowledge-free instructional prompting.", }
We explore different information extraction tools for annotation of interventions to support automated systematic reviews of preclinical AD animal studies. We compare two PICO (Population, Intervention, Comparison, and Outcome) extraction tools and two prompting-based learning strategies based on Large Language Models (LLMs). Motivated by the high recall of a dictionary-based approach, we define a two-stage method, removing false positives obtained from regexes with a pre-trained LM. With ChatGPT-based filtering using three-shot prompting, our approach reduces almost two-thirds of False Positives compared to the dictionary approach alone, while outperforming knowledge-free instructional prompting.
[ "Pu, Yiyuan", "Hair, Kaitlyn", "Beck, Daniel", "Conway, Mike", "MacLeod, Malcolm", "Verspoor, Karin" ]
Intervention extraction in preclinical animal studies of {A}lzheimer{'}s Disease: Enhancing regex performance with language model-based filtering
bionlp-1.39
Poster
2105.04397v1
https://aclanthology.org/2024.bionlp-1.40.bib
@inproceedings{achara-etal-2024-efficient, title = "Efficient Biomedical Entity Linking: Clinical Text Standardization with Low-Resource Techniques", author = "Achara, Akshit and Sasidharan, Sanand and N, Gagan", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.40", pages = "493--505", abstract = "Clinical text is rich in information, with mentions of treatment, medication and anatomy among many other clinical terms. Multiple terms can refer to the same core concepts which can be referred as a clinical entity. Ontologies like the Unified Medical Language System (UMLS) are developed and maintained to store millions of clinical entities including the definitions, relations and other corresponding information. These ontologies are used for standardization of clinical text by normalizing varying surface forms of a clinical term through Biomedical entity linking. With the introduction of transformer-based language models, there has been significant progress in Biomedical entity linking. In this work, we focus on learning through synonym pairs associated with the entities. As compared to the existing approaches, our approach significantly reduces the training data and resource consumption. Moreover, we propose a suite of context-based and context-less reranking techniques for performing the entity disambiguation. Overall, we achieve similar performance to the state-of-the-art zero-shot and distant supervised entity linking techniques on the Medmentions dataset, the largest annotated dataset on UMLS, without any domain-based training. Finally, we show that retrieval performance alone might not be sufficient as an evaluation metric and introduce an article level quantitative and qualitative analysis to reveal further insights on the performance of entity linking methods.", }
Clinical text is rich in information, with mentions of treatment, medication and anatomy among many other clinical terms. Multiple terms can refer to the same core concepts which can be referred as a clinical entity. Ontologies like the Unified Medical Language System (UMLS) are developed and maintained to store millions of clinical entities including the definitions, relations and other corresponding information. These ontologies are used for standardization of clinical text by normalizing varying surface forms of a clinical term through Biomedical entity linking. With the introduction of transformer-based language models, there has been significant progress in Biomedical entity linking. In this work, we focus on learning through synonym pairs associated with the entities. As compared to the existing approaches, our approach significantly reduces the training data and resource consumption. Moreover, we propose a suite of context-based and context-less reranking techniques for performing the entity disambiguation. Overall, we achieve similar performance to the state-of-the-art zero-shot and distant supervised entity linking techniques on the Medmentions dataset, the largest annotated dataset on UMLS, without any domain-based training. Finally, we show that retrieval performance alone might not be sufficient as an evaluation metric and introduce an article level quantitative and qualitative analysis to reveal further insights on the performance of entity linking methods.
[ "Achara, Akshit", "Sasidharan, Sanand", "N, Gagan" ]
Efficient Biomedical Entity Linking: Clinical Text Standardization with Low-Resource Techniques
bionlp-1.40
Poster
2405.15134v2
https://aclanthology.org/2024.bionlp-1.41.bib
@inproceedings{ravichandran-etal-2024-xai, title = "{XAI} for Better Exploitation of Text in Medical Decision Support", author = {Ravichandran, Ajay Madhavan and Grune, Julianna and Feldhus, Nils and Burchardt, Aljoscha and Roller, Roland and M{\"o}ller, Sebastian}, editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.41", pages = "506--513", abstract = "In electronic health records, text data is considered a valuable resource as it complements a medical history and may contain information that cannot be easily included in tables. But why does the inclusion of clinical texts as additional input into multimodal models, not always significantly improve the performance of medical decision-support systems? Explainable AI (XAI) might provide the answer. We examine which information in text and structured data influences the performance of models in the context of multimodal decision support for biomedical tasks. Using data from an intensive care unit and targeting a mortality prediction task, we compare information that has been considered relevant by XAI methods to the opinion of a physician.", }
In electronic health records, text data is considered a valuable resource as it complements a medical history and may contain information that cannot be easily included in tables. But why does the inclusion of clinical texts as additional input into multimodal models, not always significantly improve the performance of medical decision-support systems? Explainable AI (XAI) might provide the answer. We examine which information in text and structured data influences the performance of models in the context of multimodal decision support for biomedical tasks. Using data from an intensive care unit and targeting a mortality prediction task, we compare information that has been considered relevant by XAI methods to the opinion of a physician.
[ "Ravichandran, Ajay Madhavan", "Grune, Julianna", "Feldhus, Nils", "Burchardt, Aljoscha", "Roller, Roland", "M{\\\"o}ller, Sebastian" ]
{XAI} for Better Exploitation of Text in Medical Decision Support
bionlp-1.41
Poster
2306.01668v1
https://aclanthology.org/2024.bionlp-1.42.bib
@inproceedings{cabrera-lozoya-etal-2024-optimizing, title = "Optimizing Multimodal Large Language Models for Detection of Alcohol Advertisements via Adaptive Prompting", author = "Cabrera Lozoya, Daniel and Liu, Jiahe and D{'}Alfonso, Simon and Conway, Mike", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.42", pages = "514--525", abstract = "Adolescents exposed to advertisements promoting addictive substances exhibit a higher likelihood of subsequent substance use. The predominant source for youth exposure to such advertisements is through online content accessed via smartphones. Detecting these advertisements is crucial for establishing and maintaining a safer online environment for young people. In our study, we utilized Multimodal Large Language Models (MLLMs) to identify addictive substance advertisements in digital media. The performance of MLLMs depends on the quality of the prompt used to instruct the model. To optimize our prompts, an adaptive prompt engineering approach was implemented, leveraging a genetic algorithm to refine and enhance the prompts. To evaluate the model{'}s performance, we augmented the RICO dataset, consisting of Android user interface screenshots, by superimposing alcohol ads onto them. Our results indicate that the MLLM can detect advertisements promoting alcohol with a 0.94 accuracy and a 0.94 F1 score.", }
Adolescents exposed to advertisements promoting addictive substances exhibit a higher likelihood of subsequent substance use. The predominant source for youth exposure to such advertisements is through online content accessed via smartphones. Detecting these advertisements is crucial for establishing and maintaining a safer online environment for young people. In our study, we utilized Multimodal Large Language Models (MLLMs) to identify addictive substance advertisements in digital media. The performance of MLLMs depends on the quality of the prompt used to instruct the model. To optimize our prompts, an adaptive prompt engineering approach was implemented, leveraging a genetic algorithm to refine and enhance the prompts. To evaluate the model{'}s performance, we augmented the RICO dataset, consisting of Android user interface screenshots, by superimposing alcohol ads onto them. Our results indicate that the MLLM can detect advertisements promoting alcohol with a 0.94 accuracy and a 0.94 F1 score.
[ "Cabrera Lozoya, Daniel", "Liu, Jiahe", "D{'}Alfonso, Simon", "Conway, Mike" ]
Optimizing Multimodal Large Language Models for Detection of Alcohol Advertisements via Adaptive Prompting
bionlp-1.42
Poster
2304.04187v3
https://aclanthology.org/2024.bionlp-1.43.bib
@inproceedings{holgate-etal-2024-extracting, title = "Extracting Epilepsy Patient Data with Llama 2", author = "Holgate, Ben and Fang, Shichao and Shek, Anthony and McWilliam, Matthew and Viana, Pedro and Winston, Joel S. and Teo, James T. and Richardson, Mark P.", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.43", pages = "526--535", abstract = "We fill a gap in scholarship by applying a generative Large Language Model (LLM) to extract information from clinical free text about the frequency of seizures experienced by people with epilepsy. Seizure frequency is difficult to determine across time from unstructured doctors{'} and nurses{'} reports of outpatients{'} visits that are stored in Electronic Health Records (EHRs) in the United Kingdom{'}s National Health Service (NHS). We employ Meta{'}s Llama 2 to mine the EHRs of people with epilepsy and determine, where possible, a person{'}s seizure frequency at a given point in time. The results demonstrate that the new, powerful generative LLMs may improve outcomes for clinical NLP research in epilepsy and other areas.", }
We fill a gap in scholarship by applying a generative Large Language Model (LLM) to extract information from clinical free text about the frequency of seizures experienced by people with epilepsy. Seizure frequency is difficult to determine across time from unstructured doctors{'} and nurses{'} reports of outpatients{'} visits that are stored in Electronic Health Records (EHRs) in the United Kingdom{'}s National Health Service (NHS). We employ Meta{'}s Llama 2 to mine the EHRs of people with epilepsy and determine, where possible, a person{'}s seizure frequency at a given point in time. The results demonstrate that the new, powerful generative LLMs may improve outcomes for clinical NLP research in epilepsy and other areas.
[ "Holgate, Ben", "Fang, Shichao", "Shek, Anthony", "McWilliam, Matthew", "Viana, Pedro", "Winston, Joel S.", "Teo, James T.", "Richardson, Mark P." ]
Extracting Epilepsy Patient Data with Llama 2
bionlp-1.43
Poster
2311.01280v1
https://aclanthology.org/2024.bionlp-1.44.bib
@inproceedings{basaragin-etal-2024-know, title = "How do you know that? Teaching Generative Language Models to Reference Answers to Biomedical Questions", author = "Ba{\v{s}}aragin, Bojana and Ljaji{\'c}, Adela and Medvecki, Darija and Cassano, Lorenzo and Ko{\v{s}}prdi{\'c}, Milo{\v{s}} and Milo{\v{s}}evi{\'c}, Nikola", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.44", pages = "536--547", abstract = "Large language models (LLMs) have recently become the leading source of answers for users{'} questions online. Despite their ability to offer eloquent answers, their accuracy and reliability can pose a significant challenge. This is especially true for sensitive domains such as biomedicine, where there is a higher need for factually correct answers. This paper introduces a biomedical retrieval-augmented generation (RAG) system designed to enhance the reliability of generated responses. The system is based on a fine-tuned LLM for the referenced question-answering, where retrieved relevant abstracts from PubMed are passed to LLM{'}s context as input through a prompt. Its output is an answer based on PubMed abstracts, where each statement is referenced accordingly, allowing the users to verify the answer. Our retrieval system achieves an absolute improvement of 23{\%} compared to the PubMed search engine. Based on the manual evaluation on a small sample, our fine-tuned LLM component achieves comparable results to GPT-4 Turbo in referencing relevant abstracts. We make the dataset used to fine-tune the models and the fine-tuned models based on Mistral-7B-instruct-v0.1 and v0.2 publicly available.", }
Large language models (LLMs) have recently become the leading source of answers for users{'} questions online. Despite their ability to offer eloquent answers, their accuracy and reliability can pose a significant challenge. This is especially true for sensitive domains such as biomedicine, where there is a higher need for factually correct answers. This paper introduces a biomedical retrieval-augmented generation (RAG) system designed to enhance the reliability of generated responses. The system is based on a fine-tuned LLM for the referenced question-answering, where retrieved relevant abstracts from PubMed are passed to LLM{'}s context as input through a prompt. Its output is an answer based on PubMed abstracts, where each statement is referenced accordingly, allowing the users to verify the answer. Our retrieval system achieves an absolute improvement of 23{\%} compared to the PubMed search engine. Based on the manual evaluation on a small sample, our fine-tuned LLM component achieves comparable results to GPT-4 Turbo in referencing relevant abstracts. We make the dataset used to fine-tune the models and the fine-tuned models based on Mistral-7B-instruct-v0.1 and v0.2 publicly available.
[ "Ba{\\v{s}}aragin, Bojana", "Ljaji{\\'c}, Adela", "Medvecki, Darija", "Cassano, Lorenzo", "Ko{\\v{s}}prdi{\\'c}, Milo{\\v{s}}", "Milo{\\v{s}}evi{\\'c}, Nikola" ]
How do you know that? Teaching Generative Language Models to Reference Answers to Biomedical Questions
bionlp-1.44
Poster
2407.05015v1
https://aclanthology.org/2024.bionlp-1.45.bib
@inproceedings{williamson-etal-2024-low, title = "Low Resource {ICD} Coding of Hospital Discharge Summaries", author = "Williamson, Ashton and de Hilster, David and Meyers, Amnon and Hubig, Nina and Apon, Amy", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.45", pages = "548--558", abstract = "Medical coding is the process by which standardized medical codes are assigned to patient health records. This is a complex and challenging task that typically requires an expert human coder to review health records and assign codes from a classification system based on a standard set of rules. Since health records typically consist of a large proportion of free-text documents, this problem has traditionally been approached as a natural language processing (NLP) task. While machine learning-based methods have seen recent popularity on this task, they tend to struggle with codes that are assigned less frequently, for which little or no training data exists. In this work we utilize the open-source NLP programming language, NLP++, to design and build an automated system to assign International Classification of Diseases (ICD) codes to discharge summaries that functions in the absence of labeled training data. We evaluate our system using the MIMIC-III dataset and find that for codes with little training data, our approach achieves competitive performance compared to state-of-the-art machine learning approaches.", }
Medical coding is the process by which standardized medical codes are assigned to patient health records. This is a complex and challenging task that typically requires an expert human coder to review health records and assign codes from a classification system based on a standard set of rules. Since health records typically consist of a large proportion of free-text documents, this problem has traditionally been approached as a natural language processing (NLP) task. While machine learning-based methods have seen recent popularity on this task, they tend to struggle with codes that are assigned less frequently, for which little or no training data exists. In this work we utilize the open-source NLP programming language, NLP++, to design and build an automated system to assign International Classification of Diseases (ICD) codes to discharge summaries that functions in the absence of labeled training data. We evaluate our system using the MIMIC-III dataset and find that for codes with little training data, our approach achieves competitive performance compared to state-of-the-art machine learning approaches.
[ "Williamson, Ashton", "de Hilster, David", "Meyers, Amnon", "Hubig, Nina", "Apon, Amy" ]
Low Resource {ICD} Coding of Hospital Discharge Summaries
bionlp-1.45
Poster
2405.11622v2
https://aclanthology.org/2024.bionlp-1.46.bib
@inproceedings{maschhur-etal-2024-towards, title = "Towards {ML}-supported Triage Prediction in Real-World Emergency Room Scenarios", author = "Maschhur, Faraz and Netter, Klaus and Schmeier, Sven and Ostermann, Katrin and Palunis, Rimantas and Strapatsas, Tobias and Roller, Roland", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.46", pages = "559--569", abstract = "In emergency wards, patients are prioritized by clinical staff according to the urgency of their medical condition. This can be achieved by categorizing patients into different labels of urgency ranging from immediate to not urgent. However, in order to train machine learning models offering support in this regard, there is more than approaching this as a multi-class problem. This work explores the challenges and obstacles of automatic triage using anonymized real-world multi-modal ambulance data in Germany.", }
In emergency wards, patients are prioritized by clinical staff according to the urgency of their medical condition. This can be achieved by categorizing patients into different labels of urgency ranging from immediate to not urgent. However, in order to train machine learning models offering support in this regard, there is more than approaching this as a multi-class problem. This work explores the challenges and obstacles of automatic triage using anonymized real-world multi-modal ambulance data in Germany.
[ "Maschhur, Faraz", "Netter, Klaus", "Schmeier, Sven", "Ostermann, Katrin", "Palunis, Rimantas", "Strapatsas, Tobias", "Roller, Roland" ]
Towards {ML}-supported Triage Prediction in Real-World Emergency Room Scenarios
bionlp-1.46
Poster
2403.07038v1
https://aclanthology.org/2024.bionlp-1.47.bib
@inproceedings{frei-kramer-2024-creating, title = "Creating Ontology-annotated Corpora from {W}ikipedia for Medical Named-entity Recognition", author = "Frei, Johann and Kramer, Frank", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.47", pages = "570--579", abstract = "Acquiring annotated corpora for medical NLP is challenging due to legal and privacy constraints and costly annotation efforts, and using annotated public datasets may not align well with the desired target application in terms of annotation style or language. We investigate the approach of utilizing Wikipedia and WikiData jointly to acquire an unsupervised annotated corpus for named-entity recognition (NER). By controlling the annotation ruleset through WikiData{'}s ontology, we extract custom-defined annotations and dynamically impute weak annotations by an adaptive loss scaling. Our validation on German medication detection datasets yields competitive results. The entire pipeline only relies on open models and data resources, enabling reproducibility and open sharing of models and corpora. All relevant assets are shared on GitHub.", }
Acquiring annotated corpora for medical NLP is challenging due to legal and privacy constraints and costly annotation efforts, and using annotated public datasets may not align well with the desired target application in terms of annotation style or language. We investigate the approach of utilizing Wikipedia and WikiData jointly to acquire an unsupervised annotated corpus for named-entity recognition (NER). By controlling the annotation ruleset through WikiData{'}s ontology, we extract custom-defined annotations and dynamically impute weak annotations by an adaptive loss scaling. Our validation on German medication detection datasets yields competitive results. The entire pipeline only relies on open models and data resources, enabling reproducibility and open sharing of models and corpora. All relevant assets are shared on GitHub.
[ "Frei, Johann", "Kramer, Frank" ]
Creating Ontology-annotated Corpora from {W}ikipedia for Medical Named-entity Recognition
bionlp-1.47
Poster
2212.07429v1
https://aclanthology.org/2024.bionlp-1.48.bib
@inproceedings{lanz-pecina-2024-paragraph, title = "Paragraph Retrieval for Enhanced Question Answering in Clinical Documents", author = "Lanz, Vojtech and Pecina, Pavel", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.48", pages = "580--590", abstract = "Healthcare professionals often manually extract information from large clinical documents to address patient-related questions. The use of Natural Language Processing (NLP) techniques, particularly Question Answering (QA) models, is a promising direction for improving the efficiency of this process. However, document-level QA from large documents is often impractical or even infeasible (for model training and inference). In this work, we solve the document-level QA from clinical reports in a two-step approach: first, the entire report is split into segments and for a given question the most relevant segment is predicted by a NLP model; second, a QA model is applied to the question and the retrieved segment as context. We investigate the effectiveness of heading-based and naive paragraph segmentation approaches for various paragraph lengths on two subsets of the emrQA dataset. Our experiments reveal that an average paragraph length used as a parameter for the segmentation has no significant effect on performance during the whole document-level QA process. That means experiments focusing on segmentation into shorter paragraphs perform similarly to those focusing on entire unsegmented reports. Surprisingly, naive uniform segmentation is sufficient even though it is not based on prior knowledge of the clinical document{'}s characteristics.", }
Healthcare professionals often manually extract information from large clinical documents to address patient-related questions. The use of Natural Language Processing (NLP) techniques, particularly Question Answering (QA) models, is a promising direction for improving the efficiency of this process. However, document-level QA from large documents is often impractical or even infeasible (for model training and inference). In this work, we solve the document-level QA from clinical reports in a two-step approach: first, the entire report is split into segments and for a given question the most relevant segment is predicted by a NLP model; second, a QA model is applied to the question and the retrieved segment as context. We investigate the effectiveness of heading-based and naive paragraph segmentation approaches for various paragraph lengths on two subsets of the emrQA dataset. Our experiments reveal that an average paragraph length used as a parameter for the segmentation has no significant effect on performance during the whole document-level QA process. That means experiments focusing on segmentation into shorter paragraphs perform similarly to those focusing on entire unsegmented reports. Surprisingly, naive uniform segmentation is sufficient even though it is not based on prior knowledge of the clinical document{'}s characteristics.
[ "Lanz, Vojtech", "Pecina, Pavel" ]
Paragraph Retrieval for Enhanced Question Answering in Clinical Documents
bionlp-1.48
Poster
1810.00494v1