bibtex_url | bibtext | abstract | authors | title | id | type | arxiv_id
---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.knowllm-1.15.bib
|
@inproceedings{chirkova-etal-2024-retrieval,
title = "Retrieval-augmented generation in multilingual settings",
author = "Chirkova, Nadezhda and
Rau, David and
D{\'e}jean, Herv{\'e} and
Formal, Thibault and
Clinchant, St{\'e}phane and
Nikoulina, Vassilina",
editor = "Li, Sha and
Li, Manling and
Zhang, Michael JQ and
Choi, Eunsol and
Geva, Mor and
Hase, Peter and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.knowllm-1.15",
pages = "177--188",
abstract = "Retrieval-augmented generation (RAG) has recently emerged as a promising solution for incorporating up-to-date or domain-specific knowledge into large language models (LLMs) and improving LLM factuality, but is predominantly studied in English-only settings. In this work, we consider RAG in the multilingual setting (mRAG), i.e. with user queries and the datastore in 13 languages, and investigate which components and with which adjustments are needed to build a well-performing mRAG pipeline, that can be used as a strong baseline in future works. Our findings highlight that despite the availability of high-quality off-the-shelf multilingual retrievers and generators, task-specific prompt engineering is needed to enable generation in user languages. Moreover, current evaluation metrics need adjustments for multilingual setting, to account for variations in spelling named entities. The main limitations to be addressed in future works include frequent code-switching in non-Latin alphabet languages, occasional fluency errors, wrong reading of the provided documents, or irrelevant retrieval. We release the code for the resulting mRAG baseline pipeline at https://github.com/naver/bergen, Documentation: https://github.com/naver/bergen/blob/main/documentations/multilingual.md.",
}
|
Retrieval-augmented generation (RAG) has recently emerged as a promising solution for incorporating up-to-date or domain-specific knowledge into large language models (LLMs) and improving LLM factuality, but is predominantly studied in English-only settings. In this work, we consider RAG in the multilingual setting (mRAG), i.e. with user queries and the datastore in 13 languages, and investigate which components, and with which adjustments, are needed to build a well-performing mRAG pipeline that can be used as a strong baseline in future work. Our findings highlight that despite the availability of high-quality off-the-shelf multilingual retrievers and generators, task-specific prompt engineering is needed to enable generation in user languages. Moreover, current evaluation metrics need adjustments for the multilingual setting to account for variations in the spelling of named entities. The main limitations to be addressed in future work include frequent code-switching in non-Latin alphabet languages, occasional fluency errors, wrong reading of the provided documents, and irrelevant retrieval. We release the code for the resulting mRAG baseline pipeline at https://github.com/naver/bergen; documentation: https://github.com/naver/bergen/blob/main/documentations/multilingual.md.
|
[
"Chirkova, Nadezhda",
"Rau, David",
"D{\\'e}jean, Herv{\\'e}",
"Formal, Thibault",
"Clinchant, St{\\'e}phane",
"Nikoulina, Vassilina"
] |
Retrieval-augmented generation in multilingual settings
|
knowllm-1.15
|
Poster
|
2401.01854v4
|
https://aclanthology.org/2024.knowllm-1.16.bib
|
@inproceedings{buhnila-etal-2024-retrieve,
title = "Retrieve, Generate, Evaluate: A Case Study for Medical Paraphrases Generation with Small Language Models",
author = "Buhnila, Ioana and
Sinha, Aman and
Constant, Mathieu",
editor = "Li, Sha and
Li, Manling and
Zhang, Michael JQ and
Choi, Eunsol and
Geva, Mor and
Hase, Peter and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.knowllm-1.16",
pages = "189--203",
abstract = "Recent surge in the accessibility of large language models (LLMs) to the general population can lead to untrackable use of such models for medical-related recommendations. Language generation via LLMs models has two key problems: firstly, they are prone to hallucination and therefore, for any medical purpose they require scientific and factual grounding; secondly, LLMs pose tremendous challenge to computational resources due to their gigantic model size. In this work, we introduce pRAGe, a Pipeline for Retrieval Augmented Generation and Evaluation of medical paraphrases generation using Small Language Models (SLM). We study the effectiveness of SLMs and the impact of external knowledge base for medical paraphrase generation in French.",
}
|
The recent surge in the accessibility of large language models (LLMs) to the general population can lead to untrackable use of such models for medical-related recommendations. Language generation via LLMs has two key problems: firstly, they are prone to hallucination and therefore, for any medical purpose, they require scientific and factual grounding; secondly, LLMs pose a tremendous challenge to computational resources due to their gigantic model size. In this work, we introduce pRAGe, a Pipeline for Retrieval Augmented Generation and Evaluation of medical paraphrases generation using Small Language Models (SLM). We study the effectiveness of SLMs and the impact of an external knowledge base for medical paraphrase generation in French.
|
[
"Buhnila, Ioana",
"Sinha, Aman",
"Constant, Mathieu"
] |
Retrieve, Generate, Evaluate: A Case Study for Medical Paraphrases Generation with Small Language Models
|
knowllm-1.16
|
Poster
|
2407.16565v1
|
https://aclanthology.org/2024.langmol-1.1.bib
|
@inproceedings{edwards-etal-2024-l,
title = "{L}+{M}-24: Building a Dataset for {L}anguage+{M}olecules @ {ACL} 2024",
author = "Edwards, Carl and
Wang, Qingyun and
Zhao, Lawrence and
Ji, Heng",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.1",
pages = "1--9",
abstract = "Language-molecule models have emerged as an exciting direction for molecular discovery and understanding. However, training these models is challenging due to the scarcity of molecule-language pair datasets. At this point, datasets have been released which are 1) small and scraped from existing databases, 2) large but noisy and constructed by performing entity linking on the scientific literature, and 3) built by converting property prediction datasets to natural language using templates. In this document, we detail the L+M-24 dataset, which has been created for the Language + Molecules Workshop shared task at ACL 2024. In particular, L+M-24 is designed to focus on three key benefits of natural language in molecule design: compositionality, functionality, and abstraction",
}
|
Language-molecule models have emerged as an exciting direction for molecular discovery and understanding. However, training these models is challenging due to the scarcity of molecule-language pair datasets. At this point, datasets have been released which are 1) small and scraped from existing databases, 2) large but noisy and constructed by performing entity linking on the scientific literature, and 3) built by converting property prediction datasets to natural language using templates. In this document, we detail the L+M-24 dataset, which has been created for the Language + Molecules Workshop shared task at ACL 2024. In particular, L+M-24 is designed to focus on three key benefits of natural language in molecule design: compositionality, functionality, and abstraction.
|
[
"Edwards, Carl",
"Wang, Qingyun",
"Zhao, Lawrence",
"Ji, Heng"
] |
{L}+{M}-24: Building a Dataset for {L}anguage+{M}olecules @ {ACL} 2024
|
langmol-1.1
|
Poster
|
2403.00791v2
|
https://aclanthology.org/2024.langmol-1.2.bib
|
@inproceedings{xie-chi-2024-chemical,
title = "Could Chemical Language Models benefit from Message Passing",
author = "Xie, Jiaqing and
Chi, Ziheng",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.2",
pages = "10--20",
abstract = "Pretrained language models (LMs) showcase significant capabilities in processing molecular text, while concurrently, message passing neural networks (MPNNs) demonstrate resilience and versatility in the domain of molecular science. Despite these advancements, we find there are limited studies investigating the bidirectional interactions between molecular structures and their corresponding textual representations. Therefore, in this paper, we propose two strategies to evaluate whether an information integration can enhance the performance: contrast learning, which involves utilizing an MPNN to supervise the training of the LM, and fusion, which exploits information from both models. Our empirical analysis reveals that the integration approaches exhibit superior performance compared to baselines when applied to smaller molecular graphs, while these integration approaches do not yield performance enhancements on large scale graphs.",
}
|
Pretrained language models (LMs) showcase significant capabilities in processing molecular text, while concurrently, message passing neural networks (MPNNs) demonstrate resilience and versatility in the domain of molecular science. Despite these advancements, we find there are limited studies investigating the bidirectional interactions between molecular structures and their corresponding textual representations. Therefore, in this paper, we propose two strategies to evaluate whether an information integration can enhance the performance: contrast learning, which involves utilizing an MPNN to supervise the training of the LM, and fusion, which exploits information from both models. Our empirical analysis reveals that the integration approaches exhibit superior performance compared to baselines when applied to smaller molecular graphs, while these integration approaches do not yield performance enhancements on large scale graphs.
|
[
"Xie, Jiaqing",
"Chi, Ziheng"
] |
Could Chemical Language Models benefit from Message Passing
|
langmol-1.2
|
Poster
|
2405.08334v1
|
https://aclanthology.org/2024.langmol-1.3.bib
|
@inproceedings{gkoumas-2024-almol,
title = "{ALM}ol: Aligned Language-Molecule Translation {LLM}s through Offline Preference Contrastive Optimisation",
author = "Gkoumas, Dimitris",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.3",
pages = "21--27",
abstract = "The field of chemistry and Artificial Intelligence (AI) intersection is an area of active research that aims to accelerate scientific discovery. The integration of large language models (LLMs) with scientific modalities has shown significant promise in this endeavour. However, challenges persist in effectively addressing training efficacy and the out-of-distribution problem, particularly as existing approaches rely on larger models and datasets. In this context, we focus on machine language-molecule translation and deploy a novel training approach called contrastive preference optimisation, which avoids generating translations that are merely adequate but not perfect. To ensure generalisability and mitigate memorisation effects, we conduct experiments using only 10{\%} of the data. Our results demonstrate that our models achieve up to a 32{\%} improvement compared to counterpart models. Finally, we introduce a fine-grained, domain-agnostic evaluation method to assess hallucination in LLMs and promote responsible use.",
}
|
The intersection of chemistry and Artificial Intelligence (AI) is an area of active research that aims to accelerate scientific discovery. The integration of large language models (LLMs) with scientific modalities has shown significant promise in this endeavour. However, challenges persist in effectively addressing training efficacy and the out-of-distribution problem, particularly as existing approaches rely on larger models and datasets. In this context, we focus on machine language-molecule translation and deploy a novel training approach called contrastive preference optimisation, which avoids generating translations that are merely adequate but not perfect. To ensure generalisability and mitigate memorisation effects, we conduct experiments using only 10{\%} of the data. Our results demonstrate that our models achieve up to a 32{\%} improvement compared to counterpart models. Finally, we introduce a fine-grained, domain-agnostic evaluation method to assess hallucination in LLMs and promote responsible use.
|
[
"Gkoumas, Dimitris"
] |
{ALM}ol: Aligned Language-Molecule Translation {LLM}s through Offline Preference Contrastive Optimisation
|
langmol-1.3
|
Poster
|
2405.08619v3
|
https://aclanthology.org/2024.langmol-1.4.bib
|
@inproceedings{cha-lee-2024-evaluating,
title = "Evaluating Extrapolation Ability of Large Language Model in Chemical Domain",
author = "Cha, Taehun and
Lee, Donghun",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.4",
pages = "28--33",
abstract = "Solving a problem outside the training space, i.e. extrapolation, has been a long problem in the machine learning community. The current success of large language models demonstrates the LLM{'}s extrapolation ability to several unseen tasks. In line with these works, we evaluate the LLM{''}s extrapolation ability in the chemical domain. We construct a data set measuring the material properties of epoxy polymers depending on various raw materials and curing processes. LLM should predict the material property when novel raw material is introduced utilizing its chemical knowledge. Through experiments, LLM tends to choose the right direction of adjustment but fails to determine the exact degree, resulting in poor MAE on some properties. But LLM can successfully adjust the degree with only a one-shot example. The results show that LLM can extrapolate to new unseen material utilizing its chemical knowledge learned through massive pre-training.",
}
|
Solving a problem outside the training space, i.e. extrapolation, has been a long-standing problem in the machine learning community. The current success of large language models demonstrates the LLM{'}s extrapolation ability to several unseen tasks. In line with these works, we evaluate the LLM{'}s extrapolation ability in the chemical domain. We construct a data set measuring the material properties of epoxy polymers depending on various raw materials and curing processes. The LLM should predict the material property when a novel raw material is introduced, utilizing its chemical knowledge. Through experiments, the LLM tends to choose the right direction of adjustment but fails to determine the exact degree, resulting in poor MAE on some properties. But the LLM can successfully adjust the degree with only a one-shot example. The results show that the LLM can extrapolate to new, unseen materials utilizing its chemical knowledge learned through massive pre-training.
|
[
"Cha, Taehun",
"Lee, Donghun"
] |
Evaluating Extrapolation Ability of Large Language Model in Chemical Domain
|
langmol-1.4
|
Poster
|
2312.14670v1
|
https://aclanthology.org/2024.langmol-1.5.bib
|
@inproceedings{zeinalipour-etal-2024-design,
title = "Design Proteins Using Large Language Models: Enhancements and Comparative Analyses",
author = "Zeinalipour, Kamyar and
Jamshidi, Neda and
Bianchini, Monica and
Maggini, Marco and
Gori, Marco",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.5",
pages = "34--47",
abstract = "Pre-trained LLMs have demonstrated substantial capabilities across a range of conventional natural language processing (NLP) tasks, such as summarization and entity recognition. In this paper, we explore the application of LLMs in the generation of high-quality protein sequences. Specifically, we adopt a suite of pre-trained LLMs, including Mistral-7B, Llama-2-7B, Llama-3-8B, and gemma-7B, to produce valid protein sequences. All of these models are publicly available (https://github.com/KamyarZeinalipour/protein-design-LLMs).Unlike previous work in this field, our approach utilizes a relatively small dataset comprising 42,000 distinct human protein sequences. We retrain these models to process protein-related data, ensuring the generation of biologically feasible protein structures. Our findings demonstrate that even with limited data, the adapted models exhibit efficiency comparable to established protein-focused models such as ProGen varieties, ProtGPT2, and ProLLaMA, which were trained on millions of protein sequences. To validate and quantify the performance of our models, we conduct comparative analyses employing standard metrics such as pLDDT, RMSD, TM-score, and REU. Furthermore, we commit to making the trained versions of all four models publicly available, fostering greater transparency and collaboration in the field of computational biology.",
}
|
Pre-trained LLMs have demonstrated substantial capabilities across a range of conventional natural language processing (NLP) tasks, such as summarization and entity recognition. In this paper, we explore the application of LLMs in the generation of high-quality protein sequences. Specifically, we adopt a suite of pre-trained LLMs, including Mistral-7B, Llama-2-7B, Llama-3-8B, and gemma-7B, to produce valid protein sequences. All of these models are publicly available (https://github.com/KamyarZeinalipour/protein-design-LLMs). Unlike previous work in this field, our approach utilizes a relatively small dataset comprising 42,000 distinct human protein sequences. We retrain these models to process protein-related data, ensuring the generation of biologically feasible protein structures. Our findings demonstrate that even with limited data, the adapted models exhibit efficiency comparable to established protein-focused models such as ProGen varieties, ProtGPT2, and ProLLaMA, which were trained on millions of protein sequences. To validate and quantify the performance of our models, we conduct comparative analyses employing standard metrics such as pLDDT, RMSD, TM-score, and REU. Furthermore, we commit to making the trained versions of all four models publicly available, fostering greater transparency and collaboration in the field of computational biology.
|
[
"Zeinalipour, Kamyar",
"Jamshidi, Neda",
"Bianchini, Monica",
"Maggini, Marco",
"Gori, Marco"
] |
Design Proteins Using Large Language Models: Enhancements and Comparative Analyses
|
langmol-1.5
|
Poster
|
2408.06396v1
|
https://aclanthology.org/2024.langmol-1.6.bib
|
@inproceedings{pei-etal-2024-enhanced,
title = "Enhanced {B}io{T}5+ for Molecule-Text Translation: A Three-Stage Approach with Data Distillation, Diverse Training, and Voting Ensemble",
author = "Pei, Qizhi and
Wu, Lijun and
Gao, Kaiyuan and
Zhu, Jinhua and
Yan, Rui",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.6",
pages = "48--54",
abstract = "This paper presents our enhanced BioT5+ method for the Language + Molecules shared task at the ACL 2024 Workshop. The task involves {``}translating{''} between molecules and natural language, including molecule captioning and text-based molecule generation using the \textit{L+M-24} dataset. Our method consists of three stages. In the first stage, we distill data from various models. In the second stage, combined with \textit{extra} version of the provided dataset, we train diverse models for subsequent voting ensemble.We also adopt Transductive Ensemble Learning (TEL) to enhance these base models. Lastly, all models are integrated using a voting ensemble method. Experimental results demonstrate that BioT5+ achieves superior performance on \textit{L+M-24} dataset. On the final leaderboard, our method (team name: \textbf{qizhipei}) ranks \textbf{first} in the text-based molecule generation task and \textbf{second} in the molecule captioning task, highlighting its efficacy and robustness in translating between molecules and natural language. The pre-trained BioT5+ models are available at \url{https://github.com/QizhiPei/BioT5}.",
}
|
This paper presents our enhanced BioT5+ method for the Language + Molecules shared task at the ACL 2024 Workshop. The task involves {``}translating{''} between molecules and natural language, including molecule captioning and text-based molecule generation using the \textit{L+M-24} dataset. Our method consists of three stages. In the first stage, we distill data from various models. In the second stage, combined with the \textit{extra} version of the provided dataset, we train diverse models for a subsequent voting ensemble. We also adopt Transductive Ensemble Learning (TEL) to enhance these base models. Lastly, all models are integrated using a voting ensemble method. Experimental results demonstrate that BioT5+ achieves superior performance on the \textit{L+M-24} dataset. On the final leaderboard, our method (team name: \textbf{qizhipei}) ranks \textbf{first} in the text-based molecule generation task and \textbf{second} in the molecule captioning task, highlighting its efficacy and robustness in translating between molecules and natural language. The pre-trained BioT5+ models are available at \url{https://github.com/QizhiPei/BioT5}.
|
[
"Pei, Qizhi",
"Wu, Lijun",
"Gao, Kaiyuan",
"Zhu, Jinhua",
"Yan, Rui"
] |
Enhanced {B}io{T}5+ for Molecule-Text Translation: A Three-Stage Approach with Data Distillation, Diverse Training, and Voting Ensemble
|
langmol-1.6
|
Poster
|
2402.17810v2
|
https://aclanthology.org/2024.langmol-1.7.bib
|
@inproceedings{sun-etal-2024-chatmol,
title = "{C}hat{M}ol Copilot: An Agent for Molecular Modeling and Computation Powered by {LLM}s",
author = "Sun, Jinyuan and
Li, Auston and
Deng, Yifan and
Li, Jiabo",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.7",
pages = "55--65",
abstract = "Large Language Models (LLMs) like ChatGPT excel at diverse tasks when given explicit instructions, yet they often struggle with specialized domains such as molecular science, lacking in-depth reasoning and sophisticated planning capabilities. To address these limitations, we introduce ChatMol Copilot, a chatbot-like agent specifically engineered for protein design and small molecule computations. ChatMol Copilot employs a multi-level abstraction framework to expand the LLM{`}s capability. At the basic level, it integrates external computational tools through function calls, thus offloading complex tasks and enabling a focus on strategic decision-making. The second level is data abstraction. Large data sets (such as a large number of molecules created by a generative model) are stored in Redis cache, and the redis keys are referenced by LLMs for data sources involved in computation. The third level of abstraction allows the LLM to orchestrate these tools, either directly or via dynamically generated Python executables. Our evaluations demonstrate that ChatMol Copilot can adeptly manage molecular modeling tasks, effectively utilizing a variety of tools as directed. By simplifying access to sophisticated molecular modeling resources, ChatMol Copilot stands to significantly accelerate drug discovery and biotechnological innovation, empowering biochemists with advanced, user-friendly AI capabilities. The open-sourced code is available at https://github.com/ChatMol/ChatMol",
}
|
Large Language Models (LLMs) like ChatGPT excel at diverse tasks when given explicit instructions, yet they often struggle with specialized domains such as molecular science, lacking in-depth reasoning and sophisticated planning capabilities. To address these limitations, we introduce ChatMol Copilot, a chatbot-like agent specifically engineered for protein design and small molecule computations. ChatMol Copilot employs a multi-level abstraction framework to expand the LLM{'}s capability. At the basic level, it integrates external computational tools through function calls, thus offloading complex tasks and enabling a focus on strategic decision-making. The second level is data abstraction. Large data sets (such as a large number of molecules created by a generative model) are stored in a Redis cache, and the Redis keys are referenced by LLMs for data sources involved in computation. The third level of abstraction allows the LLM to orchestrate these tools, either directly or via dynamically generated Python executables. Our evaluations demonstrate that ChatMol Copilot can adeptly manage molecular modeling tasks, effectively utilizing a variety of tools as directed. By simplifying access to sophisticated molecular modeling resources, ChatMol Copilot stands to significantly accelerate drug discovery and biotechnological innovation, empowering biochemists with advanced, user-friendly AI capabilities. The open-sourced code is available at https://github.com/ChatMol/ChatMol
|
[
"Sun, Jinyuan",
"Li, Auston",
"Deng, Yifan",
"Li, Jiabo"
] |
{C}hat{M}ol Copilot: An Agent for Molecular Modeling and Computation Powered by {LLM}s
|
langmol-1.7
|
Poster
|
2306.11976v1
|
https://aclanthology.org/2024.langmol-1.8.bib
|
@inproceedings{xiong-etal-2024-scimind,
title = "{S}ci{M}ind: A Multimodal Mixture-of-Experts Model for Advancing Pharmaceutical Sciences",
author = "Xiong, Zhaoping and
Fang, Xintao and
Chu, Haotian and
Wan, Xiaozhe and
Liu, Liwei and
Li, Yameng and
Xiang, Wenkai and
Zheng, Mingyue",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.8",
pages = "66--73",
abstract = "Large language models (LLMs) have made substantial strides, but their use in reliably tackling issues within specialized domains, particularly in interdisciplinary areas like pharmaceutical sciences, is hindered by data heterogeneity, knowledge complexity, unique objectives, and a spectrum of constraint conditions. In this area, diverse modalities such as nucleic acids, proteins, molecular structures, and natural language are often involved. We designed a specialized token set and introduced a new Mixture-of-Experts (MoEs) pre-training and fine-tuning strategy to unify these modalities in one model. With this strategy, we{'}ve created a multi-modal mixture-of-experts foundational model for pharmaceutical sciences, named SciMind. This model has undergone extensive pre-training on publicly accessible datasets including nucleic acid sequences, protein sequences, molecular structure strings, and biomedical texts, and delivers good performance on biomedical text comprehension, promoter prediction, protein function prediction, molecular description, and molecular generation.",
}
|
Large language models (LLMs) have made substantial strides, but their use in reliably tackling issues within specialized domains, particularly in interdisciplinary areas like pharmaceutical sciences, is hindered by data heterogeneity, knowledge complexity, unique objectives, and a spectrum of constraint conditions. In this area, diverse modalities such as nucleic acids, proteins, molecular structures, and natural language are often involved. We designed a specialized token set and introduced a new Mixture-of-Experts (MoEs) pre-training and fine-tuning strategy to unify these modalities in one model. With this strategy, we{'}ve created a multi-modal mixture-of-experts foundational model for pharmaceutical sciences, named SciMind. This model has undergone extensive pre-training on publicly accessible datasets including nucleic acid sequences, protein sequences, molecular structure strings, and biomedical texts, and delivers good performance on biomedical text comprehension, promoter prediction, protein function prediction, molecular description, and molecular generation.
|
[
"Xiong, Zhaoping",
"Fang, Xintao",
"Chu, Haotian",
"Wan, Xiaozhe",
"Liu, Liwei",
"Li, Yameng",
"Xiang, Wenkai",
"Zheng, Mingyue"
] |
{S}ci{M}ind: A Multimodal Mixture-of-Experts Model for Advancing Pharmaceutical Sciences
|
langmol-1.8
|
Poster
|
1809.02069v1
|
https://aclanthology.org/2024.langmol-1.9.bib
|
@inproceedings{m-bran-etal-2024-knowledge,
title = "Knowledge Graph Extraction from Total Synthesis Documents",
author = "M Bran, Andres and
Jon{\v{c}}ev, Zlatko and
Schwaller, Philippe",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.9",
pages = "74--84",
abstract = "Knowledge graphs (KGs) have emerged as a powerful tool for organizing and integrating complex information, making it a suitable format for scientific knowledge. However, translating scientific knowledge into KGs is challenging as a wide variety of styles and elements to present data and ideas is used. Although efforts for KG extraction (KGE) from scientific documents exist, evaluation remains challenging and field-dependent; and existing benchmarks do not focuse on scientific information. Furthermore, establishing a general benchmark for this task is challenging as not all scientific knowledge has a ground-truth KG representation, making any benchmark prone to ambiguity. Here we propose Graph of Organic Synthesis Benchmark (GOSyBench), a benchmark for KG extraction from scientific documents in chemistry, that leverages the native KG-like structure of synthetic routes in organic chemistry. We develop KG-extraction algorithms based on LLMs (GPT-4, Claude, Mistral) and VLMs (GPT-4o), the best of which reaches 73{\%} recovery accuracy and 59{\%} precision, leaving a lot of room for improvement. We expect GOSyBench can serve as a valuable resource for evaluating and advancing KGE methods in the scientific domain, ultimately facilitating better organization, integration, and discovery of scientific knowledge.",
}
|
Knowledge graphs (KGs) have emerged as a powerful tool for organizing and integrating complex information, making it a suitable format for scientific knowledge. However, translating scientific knowledge into KGs is challenging as a wide variety of styles and elements is used to present data and ideas. Although efforts for KG extraction (KGE) from scientific documents exist, evaluation remains challenging and field-dependent, and existing benchmarks do not focus on scientific information. Furthermore, establishing a general benchmark for this task is challenging as not all scientific knowledge has a ground-truth KG representation, making any benchmark prone to ambiguity. Here we propose the Graph of Organic Synthesis Benchmark (GOSyBench), a benchmark for KG extraction from scientific documents in chemistry that leverages the native KG-like structure of synthetic routes in organic chemistry. We develop KG-extraction algorithms based on LLMs (GPT-4, Claude, Mistral) and VLMs (GPT-4o), the best of which reaches 73{\%} recovery accuracy and 59{\%} precision, leaving a lot of room for improvement. We expect GOSyBench can serve as a valuable resource for evaluating and advancing KGE methods in the scientific domain, ultimately facilitating better organization, integration, and discovery of scientific knowledge.
|
[
"M Bran, Andres",
"Jon{\\v{c}}ev, Zlatko",
"Schwaller, Philippe"
] |
Knowledge Graph Extraction from Total Synthesis Documents
|
langmol-1.9
|
Poster
|
2302.06854v1
|
https://aclanthology.org/2024.langmol-1.10.bib
|
@inproceedings{tanaka-etal-2024-nlpeople,
title = "{NLP}eople at \textit{ {L}+{M}-24} Shared Task: An Ensembled Approach for Molecule Captioning from {SMILES}",
author = "Tanaka, Shinnosuke and
Mak, Carol and
Cipcigan, Flaviu and
Barry, James and
Elkaref, Mohab and
Moses, Movina and
Kuruvanthodi, Vishnudev and
Mel, Geeth",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.10",
pages = "85--90",
abstract = "This paper presents our approach submitted to the Language + Molecules 2024 (\textit{L+M-24}) Shared Task in the Molecular Captioning track. The task involves generating captions that describe the properties of molecules that are provided in SMILES format.We propose a method for the task that decomposes the challenge of generating captions from SMILES into a classification problem,where we first predict the molecule{'}s properties. The molecules whose properties can be predicted with high accuracy show high translation metric scores in the caption generation by LLMs, while others produce low scores. Then we use the predicted properties to select the captions generated by different types of LLMs, and use that prediction as the final output. Our submission achieved an overall increase score of 15.21 on the dev set and 12.30 on the evaluation set, based on translation metrics and property metrics from the baseline.",
}
|
This paper presents our approach submitted to the Language + Molecules 2024 (\textit{L+M-24}) Shared Task in the Molecular Captioning track. The task involves generating captions that describe the properties of molecules that are provided in SMILES format. We propose a method for the task that decomposes the challenge of generating captions from SMILES into a classification problem, where we first predict the molecule{'}s properties. The molecules whose properties can be predicted with high accuracy show high translation metric scores in the caption generation by LLMs, while others produce low scores. Then we use the predicted properties to select the captions generated by different types of LLMs, and use that prediction as the final output. Our submission achieved an overall increase score of 15.21 on the dev set and 12.30 on the evaluation set, based on translation metrics and property metrics from the baseline.
|
[
"Tanaka, Shinnosuke",
"Mak, Carol",
"Cipcigan, Flaviu",
"Barry, James",
"Elkaref, Mohab",
"Moses, Movina",
"Kuruvanthodi, Vishnudev",
"Mel, Geeth"
] |
{NLP}eople at \textit{ {L}+{M}-24} Shared Task: An Ensembled Approach for Molecule Captioning from {SMILES}
|
langmol-1.10
|
Poster
|
2403.00791v2
|
https://aclanthology.org/2024.langmol-1.11.bib
|
@inproceedings{kim-wu-2024-knowlabs,
title = "Knowlab{'}s Submission to {L}+{M} Shared Task: All you need is continued pretraining of chemistry texts even for molecule captioning",
author = "Kim, Yunsoo and
Wu, Honghan",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.11",
pages = "91--96",
abstract = "This paper presents our submission to the L+M-24 shared task, focused on translating molecular structures into natural language descriptions, known as the molecule captioning task. We selected a small language model (SLM), Phi-3-mini-4k, to evaluate the impact of continued pretraining and instruction tuning for domain-specific chemical knowledge. The Phi-3 model was continued pretrained with 90M chemistry textbooks and abstracts, followed by instruction tuning on 150K question answering sets of SMILES and general chemistry knowledge. Despite the continued pretraining phase not including direct exposure to SMILES representations, it significantly enhanced the Phi-3 model{'}s performance, a 300{\%} increase for the BLEU scores, in the molecule captioning task. The code and model are released at \url{https://github.com/bluesky333/Phi3KnowChem} to facilitate research in chemical small language modeling.",
}
|
This paper presents our submission to the L+M-24 shared task, focused on translating molecular structures into natural language descriptions, known as the molecule captioning task. We selected a small language model (SLM), Phi-3-mini-4k, to evaluate the impact of continued pretraining and instruction tuning for domain-specific chemical knowledge. The Phi-3 model was continued pretrained with 90M chemistry textbooks and abstracts, followed by instruction tuning on 150K question answering sets of SMILES and general chemistry knowledge. Despite the continued pretraining phase not including direct exposure to SMILES representations, it significantly enhanced the Phi-3 model{'}s performance, a 300{\%} increase for the BLEU scores, in the molecule captioning task. The code and model are released at \url{https://github.com/bluesky333/Phi3KnowChem} to facilitate research in chemical small language modeling.
|
[
"Kim, Yunsoo",
"Wu, Honghan"
] |
Knowlab{'}s Submission to {L}+{M} Shared Task: All you need is continued pretraining of chemistry texts even for molecule captioning
|
langmol-1.11
|
Poster
|
2204.11817v3
|
https://aclanthology.org/2024.langmol-1.12.bib
|
@inproceedings{tran-etal-2024-mol2lang,
title = "{M}ol2{L}ang-{VLM}: Vision- and Text-Guided Generative Pre-trained Language Models for Advancing Molecule Captioning through Multimodal Fusion",
author = "Tran, Duong and
Pham, Nhat Truong and
Nguyen, Nguyen and
Manavalan, Balachandran",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.12",
pages = "97--102",
abstract = "This paper introduces Mol2Lang-VLM, an enhanced method for refining generative pre-trained language models for molecule captioning using multimodal features to achieve more accurate caption generation. Our approach leverages the encoder and decoder blocks of the Transformer-based architecture by introducing third sub-layers into both. Specifically, we insert sub-layers in the encoder to fuse features from SELFIES strings and molecular images, while the decoder fuses features from SMILES strings and their corresponding descriptions. Moreover, cross multi-head attention is employed instead of common multi-head attention to enable the decoder to attend to the encoder{'}s output, thereby integrating the encoded contextual information for better and more accurate caption generation. Performance evaluation on the CheBI-20 and L+M-24 benchmark datasets demonstrates Mol2Lang-VLM{'}s superiority, achieving higher accuracy and quality in caption generation compared to existing methods. Our code and pre-processed data are available at https://github.com/nhattruongpham/mol-lang-bridge/tree/mol2lang/.",
}
|
This paper introduces Mol2Lang-VLM, an enhanced method for refining generative pre-trained language models for molecule captioning using multimodal features to achieve more accurate caption generation. Our approach leverages the encoder and decoder blocks of the Transformer-based architecture by introducing third sub-layers into both. Specifically, we insert sub-layers in the encoder to fuse features from SELFIES strings and molecular images, while the decoder fuses features from SMILES strings and their corresponding descriptions. Moreover, cross multi-head attention is employed instead of common multi-head attention to enable the decoder to attend to the encoder{'}s output, thereby integrating the encoded contextual information for better and more accurate caption generation. Performance evaluation on the CheBI-20 and L+M-24 benchmark datasets demonstrates Mol2Lang-VLM{'}s superiority, achieving higher accuracy and quality in caption generation compared to existing methods. Our code and pre-processed data are available at https://github.com/nhattruongpham/mol-lang-bridge/tree/mol2lang/.
|
[
"Tran, Duong",
"Pham, Nhat Truong",
"Nguyen, Nguyen",
"Manavalan, Balach",
"ran"
] |
{M}ol2{L}ang-{VLM}: Vision- and Text-Guided Generative Pre-trained Language Models for Advancing Molecule Captioning through Multimodal Fusion
|
langmol-1.12
|
Poster
|
2010.15251v2
|
https://aclanthology.org/2024.langmol-1.13.bib
|
@inproceedings{saadat-fellay-2024-dna,
title = "{DNA} Language Model and Interpretable Graph Neural Network Identify Genes and Pathways Involved in Rare Diseases",
author = "Saadat, Ali and
Fellay, Jacques",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.13",
pages = "103--115",
abstract = "Identification of causal genes and pathways is a critical step for understanding the genetic underpinnings of rare diseases. We propose novel approaches to gene prioritization and pathway identification using DNA language model, graph neural networks, and genetic algorithm. Using HyenaDNA, a long-range genomic foundation model, we generated dynamic gene embeddings that reflect changes caused by deleterious variants. These gene embeddings were then utilized to identify candidate genes and pathways. We validated our method on a cohort of rare disease patients with partially known genetic diagnosis, demonstrating the re-identification of known causal genes and pathways and the detection of novel candidates. These findings have implications for the prevention and treatment of rare diseases by enabling targeted identification of new drug targets and therapeutic pathways.",
}
|
Identification of causal genes and pathways is a critical step for understanding the genetic underpinnings of rare diseases. We propose novel approaches to gene prioritization and pathway identification using DNA language model, graph neural networks, and genetic algorithm. Using HyenaDNA, a long-range genomic foundation model, we generated dynamic gene embeddings that reflect changes caused by deleterious variants. These gene embeddings were then utilized to identify candidate genes and pathways. We validated our method on a cohort of rare disease patients with partially known genetic diagnosis, demonstrating the re-identification of known causal genes and pathways and the detection of novel candidates. These findings have implications for the prevention and treatment of rare diseases by enabling targeted identification of new drug targets and therapeutic pathways.
|
[
"Saadat, Ali",
"Fellay, Jacques"
] |
{DNA} Language Model and Interpretable Graph Neural Network Identify Genes and Pathways Involved in Rare Diseases
|
langmol-1.13
|
Poster
|
2306.13866v2
|
https://aclanthology.org/2024.langmol-1.14.bib
|
@inproceedings{lee-lee-2024-repurformer,
title = "Repurformer: Transformers for Repurposing-Aware Molecule Generation",
author = "Lee, Changhun and
Lee, Gyumin",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.14",
pages = "116--127",
abstract = "Generating as diverse molecules as possible with desired properties is crucial for drug discovery research, which invokes many approaches based on deep generative models today. Despite recent advancements in these models, particularly in variational autoencoders (VAEs), generative adversarial networks (GANs), Transformers, and diffusion models, a significant challenge known as the sample bias problem remains. This problem occurs when generated molecules targeting the same protein tend to be structurally similar, reducing the diversity of generation. To address this, we propose leveraging multi-hop relationships among proteins and compounds. Our model, Repurformer, integrates bi-directional pretraining with Fast Fourier Transform (FFT) and low-pass filtering (LPF) to capture complex interactions and generate diverse molecules. A series of experiments on BindingDB dataset confirm that Repurformer successfully creates substitutes for anchor compounds that resemble positive compounds, increasing diversity between the anchor and generated compounds.",
}
|
Generating as diverse molecules as possible with desired properties is crucial for drug discovery research, which invokes many approaches based on deep generative models today. Despite recent advancements in these models, particularly in variational autoencoders (VAEs), generative adversarial networks (GANs), Transformers, and diffusion models, a significant challenge known as the sample bias problem remains. This problem occurs when generated molecules targeting the same protein tend to be structurally similar, reducing the diversity of generation. To address this, we propose leveraging multi-hop relationships among proteins and compounds. Our model, Repurformer, integrates bi-directional pretraining with Fast Fourier Transform (FFT) and low-pass filtering (LPF) to capture complex interactions and generate diverse molecules. A series of experiments on BindingDB dataset confirm that Repurformer successfully creates substitutes for anchor compounds that resemble positive compounds, increasing diversity between the anchor and generated compounds.
|
[
"Lee, Changhun",
"Lee, Gyumin"
] |
Repurformer: Transformers for Repurposing-Aware Molecule Generation
|
langmol-1.14
|
Poster
|
2106.03394v1
|
https://aclanthology.org/2024.langmol-1.15.bib
|
@inproceedings{nguyen-etal-2024-lang2mol,
title = "{L}ang2{M}ol-Diff: A Diffusion-Based Generative Model for Language-to-Molecule Translation Leveraging {SELFIES} Representation",
author = "Nguyen, Nguyen and
Pham, Nhat Truong and
Tran, Duong and
Manavalan, Balachandran",
editor = "Edwards, Carl and
Wang, Qingyun and
Li, Manling and
Zhao, Lawrence and
Hope, Tom and
Ji, Heng",
booktitle = "Proceedings of the 1st Workshop on Language + Molecules (L+M 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.langmol-1.15",
pages = "128--134",
abstract = "Generating de novo molecules from textual descriptions is challenging due to potential issues with molecule validity in SMILES representation and limitations of autoregressive models. This work introduces Lang2Mol-Diff, a diffusion-based language-to-molecule generative model using the SELFIES representation. Specifically, Lang2Mol-Diff leverages the strengths of two state-of-the-art molecular generative models: BioT5 and TGM-DLM. By employing BioT5 to tokenize the SELFIES representation, Lang2Mol-Diff addresses the validity issues associated with SMILES strings. Additionally, it incorporates a text diffusion mechanism from TGM-DLM to overcome the limitations of autoregressive models in this domain. To the best of our knowledge, this is the first study to leverage the diffusion mechanism for text-based de novo molecule generation using the SELFIES molecular string representation. Performance evaluation on the L+M-24 benchmark dataset shows that Lang2Mol-Diff outperforms all existing methods for molecule generation in terms of validity. Our code and pre-processed data are available at https://github.com/nhattruongpham/mol-lang-bridge/tree/lang2mol/.",
}
|
Generating de novo molecules from textual descriptions is challenging due to potential issues with molecule validity in SMILES representation and limitations of autoregressive models. This work introduces Lang2Mol-Diff, a diffusion-based language-to-molecule generative model using the SELFIES representation. Specifically, Lang2Mol-Diff leverages the strengths of two state-of-the-art molecular generative models: BioT5 and TGM-DLM. By employing BioT5 to tokenize the SELFIES representation, Lang2Mol-Diff addresses the validity issues associated with SMILES strings. Additionally, it incorporates a text diffusion mechanism from TGM-DLM to overcome the limitations of autoregressive models in this domain. To the best of our knowledge, this is the first study to leverage the diffusion mechanism for text-based de novo molecule generation using the SELFIES molecular string representation. Performance evaluation on the L+M-24 benchmark dataset shows that Lang2Mol-Diff outperforms all existing methods for molecule generation in terms of validity. Our code and pre-processed data are available at https://github.com/nhattruongpham/mol-lang-bridge/tree/lang2mol/.
|
[
"Nguyen, Nguyen",
"Pham, Nhat Truong",
"Tran, Duong",
"Manavalan, Balach",
"ran"
] |
{L}ang2{M}ol-Diff: A Diffusion-Based Generative Model for Language-to-Molecule Translation Leveraging {SELFIES} Representation
|
langmol-1.15
|
Poster
|
2211.13322v1
|
https://aclanthology.org/2024.lchange-1.1.bib
|
@inproceedings{list-van-dam-2024-invited,
title = "\textbf{Invited paper}: Computer-Assisted Language Comparison with {EDICTOR} 3",
author = "List, Johann-Mattis and
van Dam, Kellen",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.1",
pages = "1--11",
abstract = "",
}
|
[
"List, Johann-Mattis",
"van Dam, Kellen"
] |
\textbf{Invited paper}: Computer-Assisted Language Comparison with {EDICTOR} 3
|
lchange-1.1
|
Poster
|
0110012v1
|
|
https://aclanthology.org/2024.lchange-1.2.bib
|
@inproceedings{celikkol-etal-2024-exploring,
title = "Exploring Diachronic and Diatopic Changes in Dialect Continua: Tasks, Datasets and Challenges",
author = {{\c{C}}elikkol, Melis and
K{\"o}rber, Lydia and
Zhao, Wei},
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.2",
pages = "12--22",
abstract = "",
}
|
[
"{\\c{C}}elikkol, Melis",
"K{\\\"o}rber, Lydia",
"Zhao, Wei"
] |
Exploring Diachronic and Diatopic Changes in Dialect Continua: Tasks, Datasets and Challenges
|
lchange-1.2
|
Poster
|
2407.04010v1
|
|
https://aclanthology.org/2024.lchange-1.3.bib
|
@inproceedings{bruckner-etal-2024-similarity,
title = "Similarity-Based Cluster Merging for Semantic Change Modeling",
author = {Br{\"u}ckner, Christopher and
Zhang, Leixin and
Pecina, Pavel},
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.3",
pages = "23--28",
abstract = "",
}
|
[
"Br{\\\"u}ckner, Christopher",
"Zhang, Leixin",
"Pecina, Pavel"
] |
Similarity-Based Cluster Merging for Semantic Change Modeling
|
lchange-1.3
|
Poster
|
2111.11904v1
|
|
https://aclanthology.org/2024.lchange-1.4.bib
|
@inproceedings{montes-etal-2024-historical,
title = "Historical Ink: Semantic Shift Detection for 19th Century {S}panish",
author = "Montes, Tony and
Manrique-G{\'o}mez, Laura and
Manrique, Rub{\'e}n",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.4",
pages = "29--41",
abstract = "",
}
|
[
"Montes, Tony",
"Manrique-G{\\'o}mez, Laura",
"Manrique, Rub{\\'e}n"
] |
Historical Ink: Semantic Shift Detection for 19th Century {S}panish
|
lchange-1.4
|
Poster
|
2407.12852v2
|
|
https://aclanthology.org/2024.lchange-1.5.bib
|
@inproceedings{ma-etal-2024-presence,
title = "Presence or Absence: Are Unknown Word Usages in Dictionaries?",
author = "Ma, Xianghe and
Schlechtweg, Dominik and
Zhao, Wei",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.5",
pages = "42--54",
abstract = "",
}
|
[
"Ma, Xianghe",
"Schlechtweg, Dominik",
"Zhao, Wei"
] |
Presence or Absence: Are Unknown Word Usages in Dictionaries?
|
lchange-1.5
|
Poster
|
2406.00656v2
|
|
https://aclanthology.org/2024.lchange-1.6.bib
|
@inproceedings{lindhardt-overgaard-etal-2024-towards,
title = "Towards a {G}olden{H}ymns Dataset for Studying Diachronic Trends in 19th Century {D}anish Religious Hymns",
author = "Lindhardt Overgaard, Ea and
Feldkamp, Pascale and
Bizzoni, Yuri",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.6",
pages = "55--61",
abstract = "",
}
|
[
"Lindhardt Overgaard, Ea",
"Feldkamp, Pascale",
"Bizzoni, Yuri"
] |
Towards a {G}olden{H}ymns Dataset for Studying Diachronic Trends in 19th Century {D}anish Religious Hymns
|
lchange-1.6
|
Poster
|
1906.01440v1
|
|
https://aclanthology.org/2024.lchange-1.7.bib
|
@inproceedings{zhao-2024-feature,
title = "A Feature-Based Approach to Annotate the Syntax of {A}ncient {C}hinese",
author = "Zhao, Chenrong",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.7",
pages = "62--71",
abstract = "",
}
|
[
"Zhao, Chenrong"
] |
A Feature-Based Approach to Annotate the Syntax of {A}ncient {C}hinese
|
lchange-1.7
|
Poster
|
1808.03738v2
|
|
https://aclanthology.org/2024.lchange-1.8.bib
|
@inproceedings{fedorova-etal-2024-axolotl24,
title = "{AXOLOTL}{'}24 Shared Task on Multilingual Explainable Semantic Change Modeling",
author = "Fedorova, Mariia and
Mickus, Timothee and
Partanen, Niko and
Siewert, Janine and
Spaziani, Elena and
Kutuzov, Andrey",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.8",
pages = "72--91",
abstract = "",
}
|
[
"Fedorova, Mariia",
"Mickus, Timothee",
"Partanen, Niko",
"Siewert, Janine",
"Spaziani, Elena",
"Kutuzov, Andrey"
] |
{AXOLOTL}{'}24 Shared Task on Multilingual Explainable Semantic Change Modeling
|
lchange-1.8
|
Poster
|
2407.04079v1
|
|
https://aclanthology.org/2024.lchange-1.9.bib
|
@inproceedings{noble-etal-2024-improving,
title = "Improving Word Usage Graphs with Edge Induction",
author = "Noble, Bill and
Periti, Francesco and
Tahmasebi, Nina",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.9",
pages = "92--107",
abstract = "",
}
|
[
"Noble, Bill",
"Periti, Francesco",
"Tahmasebi, Nina"
] |
Improving Word Usage Graphs with Edge Induction
|
lchange-1.9
|
Poster
|
2209.13750v1
|
|
https://aclanthology.org/2024.lchange-1.10.bib
|
@inproceedings{periti-tahmasebi-2024-towards,
title = "Towards a Complete Solution to Lexical Semantic Change: an Extension to Multiple Time Periods and Diachronic Word Sense Induction",
author = "Periti, Francesco and
Tahmasebi, Nina",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.10",
pages = "108--119",
abstract = "",
}
|
[
"Periti, Francesco",
"Tahmasebi, Nina"
] |
Towards a Complete Solution to Lexical Semantic Change: an Extension to Multiple Time Periods and Diachronic Word Sense Induction
|
lchange-1.10
|
Poster
|
1811.06278v2
|
|
https://aclanthology.org/2024.lchange-1.11.bib
|
@inproceedings{dorkin-sirts-2024-tartunlp-axolotl,
title = "{T}artu{NLP} @ {AXOLOTL}-24: Leveraging Classifier Output for New Sense Detection in Lexical Semantics",
author = "Dorkin, Aleksei and
Sirts, Kairit",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.11",
pages = "120--125",
abstract = "",
}
|
[
"Dorkin, Aleksei",
"Sirts, Kairit"
] |
{T}artu{NLP} @ {AXOLOTL}-24: Leveraging Classifier Output for New Sense Detection in Lexical Semantics
|
lchange-1.11
|
Poster
|
2407.03861v1
|
|
https://aclanthology.org/2024.lchange-1.12.bib
|
@inproceedings{gao-sun-2024-etymolink,
title = "{E}tymo{L}ink: A Structured {E}nglish Etymology Dataset",
author = "Gao, Yuan and
Sun, Weiwei",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.12",
pages = "126--136",
abstract = "",
}
|
[
"Gao, Yuan",
"Sun, Weiwei"
] |
{E}tymo{L}ink: A Structured {E}nglish Etymology Dataset
|
lchange-1.12
|
Poster
|
2205.00395v2
|
|
https://aclanthology.org/2024.lchange-1.13.bib
|
@inproceedings{alfter-2024-complexity,
title = "Complexity and Indecision: A Proof-of-Concept Exploration of Lexical Complexity and Lexical Semantic Change",
author = "Alfter, David",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.13",
pages = "137--143",
abstract = "",
}
|
[
"Alfter, David"
] |
Complexity and Indecision: A Proof-of-Concept Exploration of Lexical Complexity and Lexical Semantic Change
|
lchange-1.13
|
Poster
|
2203.11476v3
|
|
https://aclanthology.org/2024.lchange-1.14.bib
|
@inproceedings{boholm-etal-2024-political,
title = "Can political dogwhistles be predicted by distributional methods for analysis of lexical semantic change?",
author = {Boholm, Max and
R{\"o}nnerstrand, Bj{\"o}rn and
Breitholtz, Ellen and
Cooper, Robin and
Lindgren, Elina and
Rettenegger, Gregor and
Sayeed, Asad},
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.14",
pages = "144--157",
abstract = "",
}
|
[
"Boholm, Max",
"R{\\\"o}nnerstr",
", Bj{\\\"o}rn",
"Breitholtz, Ellen",
"Cooper, Robin",
"Lindgren, Elina",
"Rettenegger, Gregor",
"Sayeed, Asad"
] |
Can political dogwhistles be predicted by distributional methods for analysis of lexical semantic change?
|
lchange-1.14
|
Poster
|
2305.17174v1
|
|
https://aclanthology.org/2024.lchange-1.15.bib
|
@inproceedings{lietard-etal-2024-towards,
title = "Towards an Onomasiological Study of Lexical Semantic Change Through the Induction of Concepts",
author = "Li{\'e}tard, Bastien and
Keller, Mikaela and
Denis, Pascal",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.15",
pages = "158--167",
abstract = "",
}
|
[
"Li{\\'e}tard, Bastien",
"Keller, Mikaela",
"Denis, Pascal"
] |
Towards an Onomasiological Study of Lexical Semantic Change Through the Induction of Concepts
|
lchange-1.15
|
Poster
|
2405.13529v1
|
|
https://aclanthology.org/2024.lchange-1.16.bib
|
@inproceedings{kokosinskii-etal-2024-deep,
title = "Deep-change at {AXOLOTL}-24: Orchestrating {WSD} and {WSI} Models for Semantic Change Modeling",
author = "Kokosinskii, Denis and
Kuklin, Mikhail and
Arefyev, Nikolay",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.16",
pages = "168--179",
abstract = "",
}
|
[
"Kokosinskii, Denis",
"Kuklin, Mikhail",
"Arefyev, Nikolay"
] |
Deep-change at {AXOLOTL}-24: Orchestrating {WSD} and {WSI} Models for Semantic Change Modeling
|
lchange-1.16
|
Poster
|
2408.05184v1
|
|
https://aclanthology.org/2024.lchange-1.17.bib
|
@inproceedings{he-zhao-2024-exploring,
title = "Exploring Sound Change Over Time: A Review of Computational and Human Perception",
author = "He, Siqi and
Zhao, Wei",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.17",
pages = "180--186",
abstract = "",
}
|
[
"He, Siqi",
"Zhao, Wei"
] |
Exploring Sound Change Over Time: A Review of Computational and Human Perception
|
lchange-1.17
|
Poster
|
2407.05092v1
|
|
https://aclanthology.org/2024.lchange-1.18.bib
|
@inproceedings{ren-etal-2024-shot,
title = "A Few-shot Learning Approach for Lexical Semantic Change Detection Using {GPT}-4",
author = "Ren, Zhengfei and
Caputo, Annalina and
Jones, Gareth",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Kutuzov, Andrey and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi and
Huebscher, Netta",
booktitle = "Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.lchange-1.18",
pages = "187--192",
abstract = "",
}
|
[
"Ren, Zhengfei",
"Caputo, Annalina",
"Jones, Gareth"
] |
A Few-shot Learning Approach for Lexical Semantic Change Detection Using {GPT}-4
|
lchange-1.18
|
Poster
|
2403.00226v3
|
|
https://aclanthology.org/2024.loresmt-1.1.bib
|
@inproceedings{mao-yu-2024-tuning,
title = "Tuning {LLM}s with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-resource Languages",
author = "Mao, Zhuoyuan and
Yu, Yen",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.1",
pages = "1--25",
abstract = "This article introduces contrastive alignment instructions (AlignInstruct) to address two challenges in machine translation (MT) on large language models (LLMs). One is the expansion of supported languages to previously unseen ones. The second relates to the lack of data in low-resource languages. Model fine-tuning through MT instructions (MTInstruct) is a straightforward approach to the first challenge. However, MTInstruct is limited by weak cross-lingual signals inherent in the second challenge. AlignInstruct emphasizes cross-lingual supervision via a cross-lingual discriminator built using statistical word alignments. Our results based on fine-tuning the BLOOMZ models (1b1, 3b, and 7b1) in up to 24 unseen languages showed that: (1) LLMs can effectively translate unseen languages using MTInstruct; (2) AlignInstruct led to consistent improvements in translation quality across 48 translation directions involving English; (3) Discriminator-based instructions outperformed their generative counterparts as cross-lingual instructions; (4) AlignInstruct improved performance in 30 zero-shot directions.",
}
|
This article introduces contrastive alignment instructions (AlignInstruct) to address two challenges in machine translation (MT) on large language models (LLMs). One is the expansion of supported languages to previously unseen ones. The second relates to the lack of data in low-resource languages. Model fine-tuning through MT instructions (MTInstruct) is a straightforward approach to the first challenge. However, MTInstruct is limited by weak cross-lingual signals inherent in the second challenge. AlignInstruct emphasizes cross-lingual supervision via a cross-lingual discriminator built using statistical word alignments. Our results based on fine-tuning the BLOOMZ models (1b1, 3b, and 7b1) in up to 24 unseen languages showed that: (1) LLMs can effectively translate unseen languages using MTInstruct; (2) AlignInstruct led to consistent improvements in translation quality across 48 translation directions involving English; (3) Discriminator-based instructions outperformed their generative counterparts as cross-lingual instructions; (4) AlignInstruct improved performance in 30 zero-shot directions.
|
[
"Mao, Zhuoyuan",
"Yu, Yen"
] |
Tuning {LLM}s with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-resource Languages
|
loresmt-1.1
|
Poster
|
2401.05811v2
|
https://aclanthology.org/2024.loresmt-1.2.bib
|
@inproceedings{mondshine-etal-2024-hesum,
title = "{H}e{S}um: a Novel Dataset for Abstractive Text Summarization in {H}ebrew",
author = "Mondshine, Itai and
Paz-Argaman, Tzuf and
Achi Mordechai, Asaf and
Tsarfaty, Reut",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.2",
pages = "26--36",
abstract = "While large language models (LLMs) excel in various natural language tasks in English, their performance in low-resource languages like Hebrew, especially for generative tasks such as abstractive summarization, remains unclear. The high morphological richness in Hebrew adds further challenges due to the ambiguity in sentence comprehension and the complexities in meaning construction. In this paper, we address this evaluation and resource gap by introducing HeSum, a novel benchmark dataset specifically designed for Hebrew abstractive text summarization. HeSum consists of 10,000 article-summary pairs sourced from Hebrew news websites written by professionals. Linguistic analysis confirms HeSum{'}s high abstractness and unique morphological challenges. We show that HeSum presents distinct difficulties even for state-of-the-art LLMs, establishing it as a valuable testbed for advancing generative language technology in Hebrew, and MRLs generative challenges in general.",
}
|
While large language models (LLMs) excel in various natural language tasks in English, their performance in low-resource languages like Hebrew, especially for generative tasks such as abstractive summarization, remains unclear. The high morphological richness in Hebrew adds further challenges due to the ambiguity in sentence comprehension and the complexities in meaning construction. In this paper, we address this evaluation and resource gap by introducing HeSum, a novel benchmark dataset specifically designed for Hebrew abstractive text summarization. HeSum consists of 10,000 article-summary pairs sourced from Hebrew news websites written by professionals. Linguistic analysis confirms HeSum{'}s high abstractness and unique morphological challenges. We show that HeSum presents distinct difficulties even for state-of-the-art LLMs, establishing it as a valuable testbed for advancing generative language technology in Hebrew, and MRLs generative challenges in general.
|
[
"Mondshine, Itai",
"Paz-Argaman, Tzuf",
"Achi Mordechai, Asaf",
"Tsarfaty, Reut"
] |
{H}e{S}um: a Novel Dataset for Abstractive Text Summarization in {H}ebrew
|
loresmt-1.2
|
Poster
|
2406.03897v2
|
https://aclanthology.org/2024.loresmt-1.3.bib
|
@inproceedings{kim-etal-2024-kpopmt,
title = "{K}pop{MT}: Translation Dataset with Terminology for Kpop Fandom",
author = "Kim, JiWoo and
Kim, Yunsu and
Bak, JinYeong",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.3",
pages = "37--43",
abstract = "While machines learn from existing corpora, humans have the unique capability to establish and accept new language systems. This makes human form unique language systems within social groups. Aligning with this, we focus on a gap remaining in addressing translation challenges within social groups, where in-group members utilize unique terminologies. We propose KpopMT dataset, which aims to fill this gap by enabling precise terminology translation, choosing Kpop fandom as an initiative for social groups given its global popularity. Expert translators provide 1k English translations for Korean posts and comments, each annotated with specific terminology within social groups{'} language systems. We evaluate existing translation systems including GPT models on KpopMT to identify their failure cases. Results show overall low scores, underscoring the challenges of reflecting group-specific terminologies and styles in translation. We make KpopMT publicly available.",
}
|
While machines learn from existing corpora, humans have the unique capability to establish and accept new language systems. This makes human form unique language systems within social groups. Aligning with this, we focus on a gap remaining in addressing translation challenges within social groups, where in-group members utilize unique terminologies. We propose KpopMT dataset, which aims to fill this gap by enabling precise terminology translation, choosing Kpop fandom as an initiative for social groups given its global popularity. Expert translators provide 1k English translations for Korean posts and comments, each annotated with specific terminology within social groups{'} language systems. We evaluate existing translation systems including GPT models on KpopMT to identify their failure cases. Results show overall low scores, underscoring the challenges of reflecting group-specific terminologies and styles in translation. We make KpopMT publicly available.
|
[
"Kim, JiWoo",
"Kim, Yunsu",
"Bak, JinYeong"
] |
{K}pop{MT}: Translation Dataset with Terminology for Kpop Fandom
|
loresmt-1.3
|
Poster
|
2407.07413v1
|
https://aclanthology.org/2024.loresmt-1.4.bib
|
@inproceedings{basit-etal-2024-challenges,
title = "Challenges in {U}rdu Machine Translation",
author = "Basit, Abdul and
Azeemi, Abdul Hameed and
Raza, Agha Ali",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.4",
pages = "44--49",
abstract = "Recent advancements in Neural Machine Translation (NMT) systems have significantly improved model performance on various translation benchmarks. However, these systems still face numerous challenges when translating low-resource languages such as Urdu. In this work, we highlight the specific issues faced by machine translation systems when translating Urdu language. We first conduct a comprehensive evaluation of English to Urdu Machine Translation with four diverse models: GPT-3.5 (a large language model), opus-mt-en-ur (a bilingual translation model), NLLB (a model trained for translating 200 languages), and IndicTrans2 (a specialized model for translating low-resource Indic languages). The results demonstrate that IndicTrans2 significantly outperforms other models in Urdu Machine Translation. To understand the differences in the performance of these models, we analyze the Urdu word distribution in different training datasets and compare the training methodologies. Finally, we uncover the specific translation issues and provide suggestions for improvements in Urdu machine translation systems.",
}
|
Recent advancements in Neural Machine Translation (NMT) systems have significantly improved model performance on various translation benchmarks. However, these systems still face numerous challenges when translating low-resource languages such as Urdu. In this work, we highlight the specific issues faced by machine translation systems when translating Urdu language. We first conduct a comprehensive evaluation of English to Urdu Machine Translation with four diverse models: GPT-3.5 (a large language model), opus-mt-en-ur (a bilingual translation model), NLLB (a model trained for translating 200 languages), and IndicTrans2 (a specialized model for translating low-resource Indic languages). The results demonstrate that IndicTrans2 significantly outperforms other models in Urdu Machine Translation. To understand the differences in the performance of these models, we analyze the Urdu word distribution in different training datasets and compare the training methodologies. Finally, we uncover the specific translation issues and provide suggestions for improvements in Urdu machine translation systems.
|
[
"Basit, Abdul",
"Azeemi, Abdul Hameed",
"Raza, Agha Ali"
] |
Challenges in {U}rdu Machine Translation
|
loresmt-1.4
|
Poster
|
1712.02959v1
|
https://aclanthology.org/2024.loresmt-1.5.bib
|
@inproceedings{varanasi-etal-2024-linguistically,
title = "Linguistically Informed Transformers for Text to {A}merican {S}ign {L}anguage Translation",
author = "Varanasi, Abhishek and
Sinha, Manjira and
Dasgupta, Tirthankar",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.5",
pages = "50--56",
abstract = "In this paper we propose a framework for automatic translation of English text to American Sign Language (ASL) which leverages a linguistically informed transformer model to translate English sentences into ASL gloss sequences. These glosses are then associated with respective ASL videos, effectively representing English text in ASL. To facilitate experimentation, we create an English-ASL parallel dataset on banking domain.Our preliminary results demonstrated that the linguistically informed transformer model achieves a 97.83{\%} ROUGE-L score for text-to-gloss translation on the ASLG-PC12 dataset. Furthermore, fine-tuning the transformer model on the banking domain dataset yields an 89.47{\%} ROUGE-L score when fine-tuned on ASLG-PC12 + banking domain dataset. These results demonstrate the effectiveness of the linguistically informed model for both general and domain-specific translations. To facilitate parallel dataset generation in banking-domain, we choose ASL despite having limited benchmarks and data corpus compared to some of the other sign languages.",
}
|
In this paper we propose a framework for automatic translation of English text to American Sign Language (ASL) which leverages a linguistically informed transformer model to translate English sentences into ASL gloss sequences. These glosses are then associated with respective ASL videos, effectively representing English text in ASL. To facilitate experimentation, we create an English-ASL parallel dataset on banking domain. Our preliminary results demonstrated that the linguistically informed transformer model achieves a 97.83{\%} ROUGE-L score for text-to-gloss translation on the ASLG-PC12 dataset. Furthermore, fine-tuning the transformer model on the banking domain dataset yields an 89.47{\%} ROUGE-L score when fine-tuned on ASLG-PC12 + banking domain dataset. These results demonstrate the effectiveness of the linguistically informed model for both general and domain-specific translations. To facilitate parallel dataset generation in banking-domain, we choose ASL despite having limited benchmarks and data corpus compared to some of the other sign languages.
|
[
"Varanasi, Abhishek",
"Sinha, Manjira",
"Dasgupta, Tirthankar"
] |
Linguistically Informed Transformers for Text to {A}merican {S}ign {L}anguage Translation
|
loresmt-1.5
|
Poster
|
2004.00588v2
|
https://aclanthology.org/2024.loresmt-1.6.bib
|
@inproceedings{park-etal-2024-low,
title = "Low-Resource Cross-Lingual Summarization through Few-Shot Learning with Large Language Models",
author = "Park, Gyutae and
Hwang, Seojin and
Lee, Hwanhee",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.6",
pages = "57--63",
abstract = "Cross-lingual summarization (XLS) aims to generate a summary in a target language different from the source language document. While large language models (LLMs) have shown promising zero-shot XLS performance, their few-shot capabilities on this task remain unexplored, especially for low-resource languages with limited parallel data. In this paper, we investigate the few-shot XLS performance of various models, including Mistral-7B-Instruct-v0.2, GPT-3.5, and GPT-4.Our experiments demonstrate that few-shot learning significantly improves the XLS performance of LLMs, particularly GPT-3.5 and GPT-4, in low-resource settings. However, the open-source model Mistral-7B-Instruct-v0.2 struggles to adapt effectively to the XLS task with limited examples. Our findings highlight the potential of few-shot learning for improving XLS performance and the need for further research in designing LLM architectures and pre-training objectives tailored for this task. We provide a future work direction to explore more effective few-shot learning strategies and to investigate the transfer learning capabilities of LLMs for cross-lingual summarization.",
}
|
Cross-lingual summarization (XLS) aims to generate a summary in a target language different from the source language document. While large language models (LLMs) have shown promising zero-shot XLS performance, their few-shot capabilities on this task remain unexplored, especially for low-resource languages with limited parallel data. In this paper, we investigate the few-shot XLS performance of various models, including Mistral-7B-Instruct-v0.2, GPT-3.5, and GPT-4. Our experiments demonstrate that few-shot learning significantly improves the XLS performance of LLMs, particularly GPT-3.5 and GPT-4, in low-resource settings. However, the open-source model Mistral-7B-Instruct-v0.2 struggles to adapt effectively to the XLS task with limited examples. Our findings highlight the potential of few-shot learning for improving XLS performance and the need for further research in designing LLM architectures and pre-training objectives tailored for this task. We provide a future work direction to explore more effective few-shot learning strategies and to investigate the transfer learning capabilities of LLMs for cross-lingual summarization.
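For readers unfamiliar with the technique, the minimal sketch below shows one way a few-shot cross-lingual summarization prompt can be assembled before being sent to an LLM. It is an illustration only: the function name, the placeholder example pairs, and the language names are assumptions, not code or data from the paper.

```python
# Minimal sketch of few-shot prompting for cross-lingual summarization (XLS).
# Hypothetical helper; demonstration pairs and language names are placeholders.

def build_xls_prompt(examples, document, src_lang="Korean", tgt_lang="English"):
    """Assemble k demonstration (document, summary) pairs followed by the query."""
    parts = [f"Summarize the following {src_lang} document in {tgt_lang}."]
    for i, (doc, summary) in enumerate(examples, start=1):
        parts.append(f"Example {i}\nDocument ({src_lang}): {doc}\nSummary ({tgt_lang}): {summary}")
    parts.append(f"Document ({src_lang}): {document}\nSummary ({tgt_lang}):")
    return "\n\n".join(parts)

if __name__ == "__main__":
    demos = [("<source-language article 1>", "<target-language summary 1>"),
             ("<source-language article 2>", "<target-language summary 2>")]
    # The resulting string would be passed to whichever LLM endpoint is used.
    print(build_xls_prompt(demos, "<new source-language article>"))
```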
|
[
"Park, Gyutae",
"Hwang, Seojin",
"Lee, Hwanhee"
] |
Low-Resource Cross-Lingual Summarization through Few-Shot Learning with Large Language Models
|
loresmt-1.6
|
Poster
|
1712.01813v1
|
https://aclanthology.org/2024.loresmt-1.7.bib
|
@inproceedings{roy-etal-2024-enhancing,
title = "Enhancing Low-Resource {NMT} with a Multilingual Encoder and Knowledge Distillation: A Case Study",
author = "Roy, Aniruddha and
Ray, Pretam and
Maheshwari, Ayush and
Sarkar, Sudeshna and
Goyal, Pawan",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.7",
pages = "64--73",
abstract = "Neural Machine Translation (NMT) remains a formidable challenge, especially when dealing with low-resource languages. Pre-trained sequence-to-sequence (seq2seq) multi-lingual models, such as mBART-50, have demonstrated impressive performance in various low-resource NMT tasks. However, their pre-training has been confined to 50 languages, leaving out support for numerous low-resource languages, particularly those spoken in the Indian subcontinent. Expanding mBART-50{'}s language support requires complex pre-training, risking performance decline due to catastrophic forgetting. Considering these expanding challenges, this paper explores a framework that leverages the benefits of a pre-trained language model along with knowledge distillation in a seq2seq architecture to facilitate translation for low-resource languages, including those not covered by mBART-50. The proposed framework employs a multilingual encoder-based seq2seq model as the foundational architecture and subsequently uses complementary knowledge distillation techniques to mitigate the impact of imbalanced training. Our framework is evaluated on three low-resource Indic languages in four Indic-to-Indic directions, yielding significant BLEU-4 and chrF improvements over baselines. Further, we conduct human evaluation to confirm effectiveness of our approach. Our code is publicly available at https://github.com/raypretam/Two-step-low-res-NMT.",
}
|
Neural Machine Translation (NMT) remains a formidable challenge, especially when dealing with low-resource languages. Pre-trained sequence-to-sequence (seq2seq) multi-lingual models, such as mBART-50, have demonstrated impressive performance in various low-resource NMT tasks. However, their pre-training has been confined to 50 languages, leaving out support for numerous low-resource languages, particularly those spoken in the Indian subcontinent. Expanding mBART-50{'}s language support requires complex pre-training, risking performance decline due to catastrophic forgetting. Considering these expanding challenges, this paper explores a framework that leverages the benefits of a pre-trained language model along with knowledge distillation in a seq2seq architecture to facilitate translation for low-resource languages, including those not covered by mBART-50. The proposed framework employs a multilingual encoder-based seq2seq model as the foundational architecture and subsequently uses complementary knowledge distillation techniques to mitigate the impact of imbalanced training. Our framework is evaluated on three low-resource Indic languages in four Indic-to-Indic directions, yielding significant BLEU-4 and chrF improvements over baselines. Further, we conduct human evaluation to confirm effectiveness of our approach. Our code is publicly available at https://github.com/raypretam/Two-step-low-res-NMT.
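As background for the distillation component mentioned above, here is a minimal sketch of a word-level knowledge-distillation loss that mixes cross-entropy on the reference tokens with a KL term pulling the student's token distribution toward a teacher's. The tensor shapes, mixing weight, and temperature are assumptions chosen for illustration; this is not the authors' implementation.

```python
# Minimal sketch of a word-level knowledge-distillation loss for seq2seq NMT.
# Assumed shapes: logits are (batch, seq_len, vocab), labels are (batch, seq_len).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      alpha=0.5, temperature=2.0, pad_id=0):
    # Standard cross-entropy against the reference translation.
    ce = F.cross_entropy(student_logits.transpose(1, 2), labels, ignore_index=pad_id)
    # KL divergence between temperature-softened student and teacher distributions.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(-1)
    mask = (labels != pad_id).float()                 # ignore padding positions
    kl = (kl * mask).sum() / mask.sum().clamp(min=1)
    return alpha * ce + (1.0 - alpha) * (temperature ** 2) * kl

if __name__ == "__main__":
    B, T, V = 2, 5, 100
    student = torch.randn(B, T, V, requires_grad=True)
    teacher = torch.randn(B, T, V)
    labels = torch.randint(1, V, (B, T))
    print(distillation_loss(student, teacher, labels).item())
```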
|
[
"Roy, Aniruddha",
"Ray, Pretam",
"Maheshwari, Ayush",
"Sarkar, Sudeshna",
"Goyal, Pawan"
] |
Enhancing Low-Resource {NMT} with a Multilingual Encoder and Knowledge Distillation: A Case Study
|
loresmt-1.7
|
Poster
|
2407.06538v1
|
https://aclanthology.org/2024.loresmt-1.8.bib
|
@inproceedings{suen-etal-2024-leveraging,
title = "Leveraging {M}andarin as a Pivot Language for Low-Resource Machine Translation between {C}antonese and {E}nglish",
author = "Suen, King and
Chow, Rudolf and
Lam, Albert",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.8",
pages = "74--84",
abstract = "Cantonese, the second most prevalent Chinese dialect after Mandarin, has been relatively overlooked in machine translation (MT) due to a scarcity of bilingual resources. In this paper, we propose to leverage Mandarin, a high-resource language, as a pivot language for translating between Cantonese and English. Our method utilizes transfer learning from pre-trained Bidirectional and Auto-Regressive Transformer (BART) models to initialize auxiliary source-pivot and pivot-target MT models. The parameters of the trained auxiliary models are then used to initialize the source-target model. Based on our experiments, our proposed method outperforms several baseline initialization strategies, naive pivot translation, and two commercial translation systems in both translation directions.",
}
|
Cantonese, the second most prevalent Chinese dialect after Mandarin, has been relatively overlooked in machine translation (MT) due to a scarcity of bilingual resources. In this paper, we propose to leverage Mandarin, a high-resource language, as a pivot language for translating between Cantonese and English. Our method utilizes transfer learning from pre-trained Bidirectional and Auto-Regressive Transformer (BART) models to initialize auxiliary source-pivot and pivot-target MT models. The parameters of the trained auxiliary models are then used to initialize the source-target model. Based on our experiments, our proposed method outperforms several baseline initialization strategies, naive pivot translation, and two commercial translation systems in both translation directions.
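To make the pivot-based initialisation concrete, the sketch below shows one common way such a scheme can be realised: the encoder of a trained source-to-pivot model and the decoder of a trained pivot-to-target model initialise the direct source-to-target model before fine-tuning. The toy architecture and module names are assumptions for illustration and do not reproduce the paper's BART-based setup.

```python
# Minimal sketch of pivot-based initialisation (illustrative toy model, not the
# paper's BART-based system): reuse the source->pivot encoder and the
# pivot->target decoder to initialise a direct source->target model.
import torch.nn as nn

class ToySeq2Seq(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2)
        self.out = nn.Linear(dim, vocab)

# Pretend these were already trained on source->pivot and pivot->target data
# (e.g. Cantonese->Mandarin and Mandarin->English).
src_pivot = ToySeq2Seq()
pivot_tgt = ToySeq2Seq()

# Initialise the direct source->target model from the two auxiliary models,
# then fine-tune it on whatever direct parallel data is available.
src_tgt = ToySeq2Seq()
src_tgt.encoder.load_state_dict(src_pivot.encoder.state_dict())
src_tgt.decoder.load_state_dict(pivot_tgt.decoder.state_dict())
```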
|
[
"Suen, King",
"Chow, Rudolf",
"Lam, Albert"
] |
Leveraging {M}andarin as a Pivot Language for Low-Resource Machine Translation between {C}antonese and {E}nglish
|
loresmt-1.8
|
Poster
|
2301.03971v1
|
https://aclanthology.org/2024.loresmt-1.9.bib
|
@inproceedings{behrooznia-etal-2024-enhancing,
title = "Enhancing {T}urkish Word Segmentation: A Focus on Borrowed Words and Invalid Morpheme",
author = "Behrooznia, Soheila and
Ansari, Ebrahim and
Zabokrtsky, Zdenek",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.9",
pages = "85--93",
abstract = "This study addresses a challenge in morphological segmentation: accurately segmenting words in languages with rich morphology. Current probabilistic methods, such as Morfessor, often produce results that lack consistency with human-segmented words. Our study adds some steps to the Morfessor segmentation process to consider invalid morphemes and borrowed words from other languages to improve morphological segmentation significantly. Comparing our idea to the results obtained from Morfessor demonstrates its efficiency, leading to more accurate morphology segmentation. This is particularly evident in the case of Turkish, highlighting the potential for further advancements in morpheme segmentation for morphologically rich languages.",
}
|
This study addresses a challenge in morphological segmentation: accurately segmenting words in languages with rich morphology. Current probabilistic methods, such as Morfessor, often produce results that lack consistency with human-segmented words. Our study adds some steps to the Morfessor segmentation process to consider invalid morphemes and borrowed words from other languages to improve morphological segmentation significantly. Comparing our idea to the results obtained from Morfessor demonstrates its efficiency, leading to more accurate morphology segmentation. This is particularly evident in the case of Turkish, highlighting the potential for further advancements in morpheme segmentation for morphologically rich languages.
|
[
"Behrooznia, Soheila",
"Ansari, Ebrahim",
"Zabokrtsky, Zdenek"
] |
Enhancing {T}urkish Word Segmentation: A Focus on Borrowed Words and Invalid Morpheme
|
loresmt-1.9
|
Poster
|
1503.02335v1
|
https://aclanthology.org/2024.loresmt-1.10.bib
|
@inproceedings{protasov-etal-2024-super,
title = "Super donors and super recipients: Studying cross-lingual transfer between high-resource and low-resource languages",
author = "Protasov, Vitaly and
Stakovskii, Elisei and
Voloshina, Ekaterina and
Shavrina, Tatiana and
Panchenko, Alexander",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.10",
pages = "94--108",
abstract = "Despite the increasing popularity of multilingualism within the NLP community, numerous languages continue to be underrepresented due to the lack of available resources.Our work addresses this gap by introducing experiments on cross-lingual transfer between 158 high-resource (HR) and 31 low-resource (LR) languages.We mainly focus on extremely LR languages, some of which are first presented in research works.Across $158*31$ HR{--}LR language pairs, we investigate how continued pretraining on different HR languages affects the mT5 model{'}s performance in representing LR languages in the LM setup.Our findings surprisingly reveal that the optimal language pairs with improved performance do not necessarily align with direct linguistic motivations, with subtoken overlap playing a more crucial role. Our investigation indicates that specific languages tend to be almost universally beneficial for pretraining (\textit{super donors}), while others benefit from pretraining with almost any language (\textit{super recipients}). This pattern recurs in various setups and is unrelated to the linguistic similarity of HR-LR pairs.Furthermore, we perform evaluation on two downstream tasks, part-of-speech (POS) tagging and machine translation (MT), showing how HR pretraining affects LR language performance.",
}
|
Despite the increasing popularity of multilingualism within the NLP community, numerous languages continue to be underrepresented due to the lack of available resources. Our work addresses this gap by introducing experiments on cross-lingual transfer between 158 high-resource (HR) and 31 low-resource (LR) languages. We mainly focus on extremely LR languages, some of which are first presented in research works. Across $158*31$ HR{--}LR language pairs, we investigate how continued pretraining on different HR languages affects the mT5 model{'}s performance in representing LR languages in the LM setup. Our findings surprisingly reveal that the optimal language pairs with improved performance do not necessarily align with direct linguistic motivations, with subtoken overlap playing a more crucial role. Our investigation indicates that specific languages tend to be almost universally beneficial for pretraining (\textit{super donors}), while others benefit from pretraining with almost any language (\textit{super recipients}). This pattern recurs in various setups and is unrelated to the linguistic similarity of HR-LR pairs. Furthermore, we perform evaluation on two downstream tasks, part-of-speech (POS) tagging and machine translation (MT), showing how HR pretraining affects LR language performance.
|
[
"Protasov, Vitaly",
"Stakovskii, Elisei",
"Voloshina, Ekaterina",
"Shavrina, Tatiana",
"Panchenko, Alex",
"er"
] |
Super donors and super recipients: Studying cross-lingual transfer between high-resource and low-resource languages
|
loresmt-1.10
|
Poster
|
2307.04600v2
|
https://aclanthology.org/2024.loresmt-1.11.bib
|
@inproceedings{abela-etal-2024-tokenisation,
title = "Tokenisation in Machine Translation Does Matter: The impact of different tokenisation approaches for {M}altese",
author = "Abela, Kurt and
Micallef, Kurt and
Tanti, Marc and
Borg, Claudia",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.11",
pages = "109--120",
abstract = "In Machine Translation, various tokenisers are used to segment inputs before training a model. Despite tokenisation being mostly considered a solved problem for languages such as English, it is still unclear as to how effective different tokenisers are for morphologically rich languages. This study aims to explore how different approaches to tokenising Maltese impact machine translation results on the English-Maltese language pair.We observed that the OPUS-100 dataset has tokenisation inconsistencies in Maltese. We empirically found that training models on the original OPUS-100 dataset led to the generation of sentences with these issues.We therefore release an updated version of the OPUS-100 parallel English-Maltese dataset, referred to as OPUS-100-Fix, fixing these inconsistencies in Maltese by using the MLRS tokeniser. We show that after fixing the inconsistencies in the dataset, results on the fixed test set increase by 2.49 BLEU points over models trained on the original OPUS-100. We also experiment with different tokenisers, including BPE and SentencePiece to find the ideal tokeniser and vocabulary size for our setup, which was shown to be BPE with a vocabulary size of 8,000. Finally, we train different models in both directions for the ENG-MLT language pair using OPUS-100-Fix by training models from scratch as well as fine-tuning other pre-trained models, namely mBART-50 and NLLB, where a finetuned NLLB model performed the best.",
}
|
In Machine Translation, various tokenisers are used to segment inputs before training a model. Despite tokenisation being mostly considered a solved problem for languages such as English, it is still unclear as to how effective different tokenisers are for morphologically rich languages. This study aims to explore how different approaches to tokenising Maltese impact machine translation results on the English-Maltese language pair. We observed that the OPUS-100 dataset has tokenisation inconsistencies in Maltese. We empirically found that training models on the original OPUS-100 dataset led to the generation of sentences with these issues. We therefore release an updated version of the OPUS-100 parallel English-Maltese dataset, referred to as OPUS-100-Fix, fixing these inconsistencies in Maltese by using the MLRS tokeniser. We show that after fixing the inconsistencies in the dataset, results on the fixed test set increase by 2.49 BLEU points over models trained on the original OPUS-100. We also experiment with different tokenisers, including BPE and SentencePiece to find the ideal tokeniser and vocabulary size for our setup, which was shown to be BPE with a vocabulary size of 8,000. Finally, we train different models in both directions for the ENG-MLT language pair using OPUS-100-Fix by training models from scratch as well as fine-tuning other pre-trained models, namely mBART-50 and NLLB, where a finetuned NLLB model performed the best.
|
[
"Abela, Kurt",
"Micallef, Kurt",
"Tanti, Marc",
"Borg, Claudia"
] |
Tokenisation in Machine Translation Does Matter: The impact of different tokenisation approaches for {M}altese
|
loresmt-1.11
|
Poster
|
2109.02550v2
|
https://aclanthology.org/2024.loresmt-1.12.bib
|
@inproceedings{cadotte-etal-2024-machine,
title = "Machine Translation Through Cultural Texts: Can Verses and Prose Help Low-Resource Indigenous Models?",
author = "Cadotte, Antoine and
Andr{\'e}, Nathalie and
Sadat, Fatiha",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.12",
pages = "121--127",
abstract = "We propose the first MT models for Innu-Aimun, an Indigenous language in Eastern Canada, in an effort to provide assistance tools for translation and language learning. This project is carried out in collaboration with an Innu community school and involves the participation of students in Innu-Aimun translation, within the framework of a meaningful consideration of Indigenous perspectives.Our contributions in this paper result from the three initial stages of this project. First, we aim to align bilingual Innu-Aimun/French texts with collaboration and participation of Innu-Aimun locutors. Second, we present the training and evaluation results of the MT models (both statistical and neural) based on these aligned corpora. And third, we collaboratively analyze some of the translations resulting from the MT models.We also see these developments for Innu-Aimun as a useful case study for answering a larger question: in a context where few aligned bilingual sentences are available for an Indigenous language, can cultural texts such as literature and poetry be used in the development of MT models?",
}
|
We propose the first MT models for Innu-Aimun, an Indigenous language in Eastern Canada, in an effort to provide assistance tools for translation and language learning. This project is carried out in collaboration with an Innu community school and involves the participation of students in Innu-Aimun translation, within the framework of a meaningful consideration of Indigenous perspectives. Our contributions in this paper result from the three initial stages of this project. First, we aim to align bilingual Innu-Aimun/French texts with collaboration and participation of Innu-Aimun locutors. Second, we present the training and evaluation results of the MT models (both statistical and neural) based on these aligned corpora. And third, we collaboratively analyze some of the translations resulting from the MT models. We also see these developments for Innu-Aimun as a useful case study for answering a larger question: in a context where few aligned bilingual sentences are available for an Indigenous language, can cultural texts such as literature and poetry be used in the development of MT models?
|
[
"Cadotte, Antoine",
"Andr{\\'e}, Nathalie",
"Sadat, Fatiha"
] |
Machine Translation Through Cultural Texts: Can Verses and Prose Help Low-Resource Indigenous Models?
|
loresmt-1.12
|
Poster
|
1802.09685v1
|
https://aclanthology.org/2024.loresmt-1.13.bib
|
@inproceedings{frontull-moser-2024-rule,
title = "Rule-Based, Neural and {LLM} Back-Translation: Comparative Insights from a Variant of {L}adin",
author = "Frontull, Samuel and
Moser, Georg",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.13",
pages = "128--138",
abstract = "This paper explores the impact of different back-translation approaches on machine translation for Ladin, specifically the Val Badia variant. Given the limited amount of parallel data available for this language (only 18k Ladin-Italian sentence pairs), we investigate the performance of a multilingual neural machine translation model fine-tuned for Ladin-Italian. In addition to the available authentic data, we synthesise further translations by using three different models: a fine-tuned neural model, a rule-based system developed specifically for this language pair, and a large language model. Our experiments show that all approaches achieve comparable translation quality in this low-resource scenario, yet round-trip translations highlight differences in model performance.",
}
|
This paper explores the impact of different back-translation approaches on machine translation for Ladin, specifically the Val Badia variant. Given the limited amount of parallel data available for this language (only 18k Ladin-Italian sentence pairs), we investigate the performance of a multilingual neural machine translation model fine-tuned for Ladin-Italian. In addition to the available authentic data, we synthesise further translations by using three different models: a fine-tuned neural model, a rule-based system developed specifically for this language pair, and a large language model. Our experiments show that all approaches achieve comparable translation quality in this low-resource scenario, yet round-trip translations highlight differences in model performance.
|
[
"Frontull, Samuel",
"Moser, Georg"
] |
Rule-Based, Neural and {LLM} Back-Translation: Comparative Insights from a Variant of {L}adin
|
loresmt-1.13
|
Poster
|
2407.08819v1
|
https://aclanthology.org/2024.loresmt-1.14.bib
|
@inproceedings{ademtew-birbo-2024-age,
title = "{AGE}: {A}mharic, {G}e{'}ez and {E}nglish Parallel Dataset",
author = "Ademtew, Henok and
Birbo, Mikiyas",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.14",
pages = "139--145",
abstract = "African languages are not well-represented in Natural Language Processing (NLP). The main reason is a lack of resources for training models. Low-resource languages, such as Amharic and Ge{'}ez, cannot benefit from modern NLP methods because of the lack of high-quality datasets. This paper presents AGE, an open-source tripartite alignment of Amharic, Ge{'}ez, and English parallel dataset. Additionally, we introduced a novel, 1,000 Ge{'}ez-centered sentences sourced from areas such as news and novels. Furthermore, we developed a model from a multilingual pre-trained language model, which brings 12.29 and 30.66 for English-Ge{'}ez and Ge{'}ez to English, respectively, and 9.39 and 12.29 for Amharic-Ge{'}ez and Ge{'}ez-Amharic respectively.",
}
|
African languages are not well-represented in Natural Language Processing (NLP). The main reason is a lack of resources for training models. Low-resource languages, such as Amharic and Ge{'}ez, cannot benefit from modern NLP methods because of the lack of high-quality datasets. This paper presents AGE, an open-source tripartite alignment of Amharic, Ge{'}ez, and English parallel dataset. Additionally, we introduced a novel, 1,000 Ge{'}ez-centered sentences sourced from areas such as news and novels. Furthermore, we developed a model from a multilingual pre-trained language model, which brings 12.29 and 30.66 for English-Ge{'}ez and Ge{'}ez to English, respectively, and 9.39 and 12.29 for Amharic-Ge{'}ez and Ge{'}ez-Amharic respectively.
|
[
"Ademtew, Henok",
"Birbo, Mikiyas"
] |
{AGE}: {A}mharic, {G}e{'}ez and {E}nglish Parallel Dataset
|
loresmt-1.14
|
Poster
|
2311.14530v3
|
https://aclanthology.org/2024.loresmt-1.15.bib
|
@inproceedings{liao-etal-2024-learning,
title = "Learning-From-Mistakes Prompting for Indigenous Language Translation",
author = "Liao, You Cheng and
Yu, Chen-Jui and
Lin, Chi-Yi and
Yun, He-Feng and
Wang, Yen-Hsiang and
Li, Hsiao-Min and
Fan, Yao-Chung",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.15",
pages = "146--158",
abstract = "Using large language models, this paper presents techniques to improve extremely low-resourced indigenous language translations. Our approaches are grounded in the use of (1) the presence of a datastore consisting of a limited number of parallel translation examples, (2) the inherent capabilities of LLMs like GPT-3.5, and (3) a word-level translation dictionary. We harness the potential of LLMs and in-context learning techniques in such a setting for using LLM as universal translators for extremely low-resourced languages. Our methodology hinges on utilizing LLMs as language compilers for selected language pairs, hypothesizing that they could internalize syntactic structures to facilitate accurate translation. We introduce three techniques: KNN-Prompting with Retrieved Prompting Context, Chain-of-Thought Prompting, and Learning-from-Mistakes Prompting, with the last method addressing past errors. The evaluation results suggest that, even with limited corpora, LLMs, when paired with proper prompting, can effectively translate extremely low-resource languages.",
}
|
Using large language models, this paper presents techniques to improve extremely low-resourced indigenous language translations. Our approaches are grounded in the use of (1) the presence of a datastore consisting of a limited number of parallel translation examples, (2) the inherent capabilities of LLMs like GPT-3.5, and (3) a word-level translation dictionary. We harness the potential of LLMs and in-context learning techniques in such a setting for using LLM as universal translators for extremely low-resourced languages. Our methodology hinges on utilizing LLMs as language compilers for selected language pairs, hypothesizing that they could internalize syntactic structures to facilitate accurate translation. We introduce three techniques: KNN-Prompting with Retrieved Prompting Context, Chain-of-Thought Prompting, and Learning-from-Mistakes Prompting, with the last method addressing past errors. The evaluation results suggest that, even with limited corpora, LLMs, when paired with proper prompting, can effectively translate extremely low-resource languages.
|
[
"Liao, You Cheng",
"Yu, Chen-Jui",
"Lin, Chi-Yi",
"Yun, He-Feng",
"Wang, Yen-Hsiang",
"Li, Hsiao-Min",
"Fan, Yao-Chung"
] |
Learning-From-Mistakes Prompting for Indigenous Language Translation
|
loresmt-1.15
|
Poster
|
2407.13343v1
|
https://aclanthology.org/2024.loresmt-1.16.bib
|
@inproceedings{al-amer-etal-2024-adopting,
title = "Adopting Ensemble Learning for Cross-lingual Classification of Crisis-related Text On Social Media",
author = "Al Amer, Shareefa and
Lee, Mark and
Smith, Phillip",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.16",
pages = "159--165",
abstract = "Cross-lingual classification poses a significant challenge in Natural Language Processing (NLP), especially when dealing with languages with scarce training data. This paper delves into the adaptation of ensemble learning to address this challenge, specifically for disaster-related social media texts. Initially, we employ Machine Translation to generate a parallel corpus in the target language to mitigate the issue of data scarcity and foster a robust training environment. Following this, we implement the bagging ensemble technique, integrating multiple classifiers into a cohesive model that demonstrates enhanced performance over individual classifiers. Our experimental results reveal significant improvements in adapting models for Arabic, utilising only English training data and markedly outperforming models intended for linguistically similar languages to English, with our ensemble model achieving an accuracy and F1 score of 0.78 when tested on original Arabic data. This research makes a substantial contribution to the field of cross-lingual classification, establishing a new benchmark for enhancing the effectiveness of language transfer in linguistically challenging scenarios.",
}
|
Cross-lingual classification poses a significant challenge in Natural Language Processing (NLP), especially when dealing with languages with scarce training data. This paper delves into the adaptation of ensemble learning to address this challenge, specifically for disaster-related social media texts. Initially, we employ Machine Translation to generate a parallel corpus in the target language to mitigate the issue of data scarcity and foster a robust training environment. Following this, we implement the bagging ensemble technique, integrating multiple classifiers into a cohesive model that demonstrates enhanced performance over individual classifiers. Our experimental results reveal significant improvements in adapting models for Arabic, utilising only English training data and markedly outperforming models intended for linguistically similar languages to English, with our ensemble model achieving an accuracy and F1 score of 0.78 when tested on original Arabic data. This research makes a substantial contribution to the field of cross-lingual classification, establishing a new benchmark for enhancing the effectiveness of language transfer in linguistically challenging scenarios.
|
[
"Al Amer, Shareefa",
"Lee, Mark",
"Smith, Phillip"
] |
Adopting Ensemble Learning for Cross-lingual Classification of Crisis-related Text On Social Media
|
loresmt-1.16
|
Poster
|
2104.00124v2
|
https://aclanthology.org/2024.loresmt-1.17.bib
|
@inproceedings{sildam-etal-2024-finetuning,
title = "Finetuning End-to-End Models for {E}stonian Conversational Spoken Language Translation",
author = {Sildam, Tiia and
Velve, Andra and
Alum{\"a}e, Tanel},
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.17",
pages = "166--174",
abstract = "This paper investigates the finetuning of end-to-end models for bidirectional Estonian-English and Estonian-Russian conversational speech-to-text translation. Due to the limited availability of speech translation data for Estonian, we created additional training data by web scraping and synthesizing data from speech recognition datasets using machine translation. We evaluated three publicly available end-to-end models: Whisper, OWSM 3.1, and SeamlessM4T. Our results indicate that fine-tuning with synthetic data enhances translation accuracy by a large margin, with SeamlessM4T matching or surpassing cascaded speech translation systems that use state-of-the-art speech recognition and machine translation models.",
}
|
This paper investigates the finetuning of end-to-end models for bidirectional Estonian-English and Estonian-Russian conversational speech-to-text translation. Due to the limited availability of speech translation data for Estonian, we created additional training data by web scraping and synthesizing data from speech recognition datasets using machine translation. We evaluated three publicly available end-to-end models: Whisper, OWSM 3.1, and SeamlessM4T. Our results indicate that fine-tuning with synthetic data enhances translation accuracy by a large margin, with SeamlessM4T matching or surpassing cascaded speech translation systems that use state-of-the-art speech recognition and machine translation models.
|
[
"Sildam, Tiia",
"Velve, Andra",
"Alum{\\\"a}e, Tanel"
] |
Finetuning End-to-End Models for {E}stonian Conversational Spoken Language Translation
|
loresmt-1.17
|
Poster
|
2407.03809v1
|
https://aclanthology.org/2024.loresmt-1.18.bib
|
@inproceedings{silva-etal-2024-benchmarking,
title = "Benchmarking Low-Resource Machine Translation Systems",
author = {Silva, Ana and
Srivastava, Nikit and
Moteu Ngoli, Tatiana and
R{\"o}der, Michael and
Moussallem, Diego and
Ngonga Ngomo, Axel-Cyrille},
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.18",
pages = "175--185",
abstract = "Assessing the performance of machine translation systems is of critical value, especially to languages with lower resource availability.Due to the large evaluation effort required by the translation task, studies often compare new systems against single systems or commercial solutions. Consequently, determining the best-performing system for specific languages is often unclear. This work benchmarks publicly available translation systems across 4 datasets and 26 languages, including low-resource languages. We consider both effectiveness and efficiency in our evaluation.Our results are made public through BENG{---}a FAIR benchmarking platform for Natural Language Generation tasks.",
}
|
Assessing the performance of machine translation systems is of critical value, especially to languages with lower resource availability. Due to the large evaluation effort required by the translation task, studies often compare new systems against single systems or commercial solutions. Consequently, determining the best-performing system for specific languages is often unclear. This work benchmarks publicly available translation systems across 4 datasets and 26 languages, including low-resource languages. We consider both effectiveness and efficiency in our evaluation. Our results are made public through BENG{---}a FAIR benchmarking platform for Natural Language Generation tasks.
|
[
"Silva, Ana",
"Srivastava, Nikit",
"Moteu Ngoli, Tatiana",
"R{\\\"o}der, Michael",
"Moussallem, Diego",
"Ngonga Ngomo, Axel-Cyrille"
] |
Benchmarking Low-Resource Machine Translation Systems
|
loresmt-1.18
|
Poster
|
2207.14473v1
|
https://aclanthology.org/2024.loresmt-1.19.bib
|
@inproceedings{begoli-etal-2024-rosetta,
title = "Rosetta Balcanica: Deriving a {``}Gold Standard{''} Neural Machine Translation ({NMT}) Parallel Dataset from High-Fidelity Resources for {W}estern {B}alkan Languages",
author = "Begoli, Edmon and
Mahbub, Maria and
Srinivasan, Sudarshan",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.19",
pages = "186--192",
abstract = "The Rosetta Balcanica is an ongoing effort in resource expansion for low-resource Western Balkans languages. This effort focuses on discovering and using accurately translated, officially mapped, and curated parallel language resources and their preparation and use as neural machine translation (NMT) datasets. Some of the guiding principles, practices, and methods employed by Rosetta Balcanica are generalizable and could apply to other low-resource language resource expansion efforts. With this goal in mind, we present our rationale and approach to discovering and using meticulously translated and officially curated low-resource language resources and our use of these resources to develop a parallel {``}gold standard{''} translation training resource. Secondly, we describe our specific methodology for NMT dataset development from these resources and its publication to a widely-used and accessible repository for natural language processing (\textit{Hugging Face Hub}). Finally, we discuss the trade-offs and limitations of our current approach, and the roadmap for future development and the expansion of the current Rosetta Balcanica language resource.",
}
|
The Rosetta Balcanica is an ongoing effort in resource expansion for low-resource Western Balkans languages. This effort focuses on discovering and using accurately translated, officially mapped, and curated parallel language resources and their preparation and use as neural machine translation (NMT) datasets. Some of the guiding principles, practices, and methods employed by Rosetta Balcanica are generalizable and could apply to other low-resource language resource expansion efforts. With this goal in mind, we present our rationale and approach to discovering and using meticulously translated and officially curated low-resource language resources and our use of these resources to develop a parallel {``}gold standard{''} translation training resource. Secondly, we describe our specific methodology for NMT dataset development from these resources and its publication to a widely-used and accessible repository for natural language processing (\textit{Hugging Face Hub}). Finally, we discuss the trade-offs and limitations of our current approach, and the roadmap for future development and the expansion of the current Rosetta Balcanica language resource.
|
[
"Begoli, Edmon",
"Mahbub, Maria",
"Srinivasan, Sudarshan"
] |
Rosetta Balcanica: Deriving a {``}Gold Standard{''} Neural Machine Translation ({NMT}) Parallel Dataset from High-Fidelity Resources for {W}estern {B}alkan Languages
|
loresmt-1.19
|
Poster
|
2004.03137v3
|
https://aclanthology.org/2024.loresmt-1.20.bib
|
@inproceedings{tran-etal-2024-irish,
title = "{I}rish-based Large Language Model with Extreme Low-Resource Settings in Machine Translation",
author = "Tran, Khanh-Tung and
O{'}Sullivan, Barry and
Nguyen, Hoang",
editor = "Ojha, Atul Kr. and
Liu, Chao-hong and
Vylomova, Ekaterina and
Pirinen, Flammie and
Abbott, Jade and
Washington, Jonathan and
Oco, Nathaniel and
Malykh, Valentin and
Logacheva, Varvara and
Zhao, Xiaobing",
booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.loresmt-1.20",
pages = "193--202",
abstract = "Large Language Models (LLMs) have demonstrated exceptional performances in a wide range of natural language processing tasks. However, their success does not always extend to machine translation, particularly in challenging scenarios such as translating low-resource languages. This study investigates the multilingual capability of LLMs, with a case study on Irish, an extremely low-resource language, focusing on translation tasks between English and Irish. We propose a dynamic, efficient language adaptation framework for English-centric LLMs, which involves layer-specific adjustments and subsequent fine-tuning for machine translation. Our findings highlight several key insights: (1) different layers in the LLM serve distinct functions such as language understanding and task reasoning, (2) effective translation requires extensive pre-training on both source and target languages, and (3) targeted fine-tuning for machine translation leads to significant improvements of 36.7{\%} for English to Irish and 133.4{\%} for Irish to English compared to the previous state-of-the-art.",
}
|
Large Language Models (LLMs) have demonstrated exceptional performances in a wide range of natural language processing tasks. However, their success does not always extend to machine translation, particularly in challenging scenarios such as translating low-resource languages. This study investigates the multilingual capability of LLMs, with a case study on Irish, an extremely low-resource language, focusing on translation tasks between English and Irish. We propose a dynamic, efficient language adaptation framework for English-centric LLMs, which involves layer-specific adjustments and subsequent fine-tuning for machine translation. Our findings highlight several key insights: (1) different layers in the LLM serve distinct functions such as language understanding and task reasoning, (2) effective translation requires extensive pre-training on both source and target languages, and (3) targeted fine-tuning for machine translation leads to significant improvements of 36.7{\%} for English to Irish and 133.4{\%} for Irish to English compared to the previous state-of-the-art.
|
[
"Tran, Khanh-Tung",
"O{'}Sullivan, Barry",
"Nguyen, Hoang"
] |
{I}rish-based Large Language Model with Extreme Low-Resource Settings in Machine Translation
|
loresmt-1.20
|
Poster
|
2206.14982v1
|
https://aclanthology.org/2024.ml4al-1.1.bib
|
@inproceedings{pavlopoulos-etal-2024-challenging,
title = "Challenging Error Correction in Recognised Byzantine {G}reek",
author = "Pavlopoulos, John and
Kougia, Vasiliki and
Garces Arias, Esteban and
Platanou, Paraskevi and
Shabalin, Stepan and
Liagkou, Konstantina and
Papadatos, Emmanouil and
Essler, Holger and
Camps, Jean-Baptiste and
Fischer, Franz",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.1",
pages = "1--12",
abstract = "Automatic correction of errors in Handwritten Text Recognition (HTR) output poses persistent challenges yet to be fully resolved. In this study, we introduce a shared task aimed at addressing this challenge, which attracted 271 submissions, yielding only a handful of promising approaches. This paper presents the datasets, the most effective methods, and an experimental analysis in error-correcting HTRed manuscripts and papyri in Byzantine Greek, the language that followed Classical and preceded Modern Greek. By using recognised and transcribed data from seven centuries, the two best-performing methods are compared, one based on a neural encoder-decoder architecture and the other based on engineered linguistic rules. We show that the recognition error rate can be reduced by both, up to 2.5 points at the level of characters and up to 15 at the level of words, while also elucidating their respective strengths and weaknesses.",
}
|
Automatic correction of errors in Handwritten Text Recognition (HTR) output poses persistent challenges yet to be fully resolved. In this study, we introduce a shared task aimed at addressing this challenge, which attracted 271 submissions, yielding only a handful of promising approaches. This paper presents the datasets, the most effective methods, and an experimental analysis in error-correcting HTRed manuscripts and papyri in Byzantine Greek, the language that followed Classical and preceded Modern Greek. By using recognised and transcribed data from seven centuries, the two best-performing methods are compared, one based on a neural encoder-decoder architecture and the other based on engineered linguistic rules. We show that the recognition error rate can be reduced by both, up to 2.5 points at the level of characters and up to 15 at the level of words, while also elucidating their respective strengths and weaknesses.
|
[
"Pavlopoulos, John",
"Kougia, Vasiliki",
"Garces Arias, Esteban",
"Platanou, Paraskevi",
"Shabalin, Stepan",
"Liagkou, Konstantina",
"Papadatos, Emmanouil",
"Essler, Holger",
"Camps, Jean-Baptiste",
"Fischer, Franz"
] |
Challenging Error Correction in Recognised Byzantine {G}reek
|
ml4al-1.1
|
Poster
|
1912.12716v2
|
https://aclanthology.org/2024.ml4al-1.2.bib
|
@inproceedings{shmidman-etal-2024-msbert,
title = "{M}s{BERT}: A New Model for the Reconstruction of Lacunae in {H}ebrew Manuscripts",
author = "Shmidman, Avi and
Shmidman, Ometz and
Gershuni, Hillel and
Koppel, Moshe",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.2",
pages = "13--18",
abstract = "Hebrew manuscripts preserve thousands of textual transmissions of post-Biblical Hebrew texts from the first millennium. In many cases, the text in the manuscripts is not fully decipherable, whether due to deterioration, perforation, burns, or otherwise. Existing BERT models for Hebrew struggle to fill these gaps, due to the many orthographical deviations found in Hebrew manuscripts. We have pretrained a new dedicated BERT model, dubbed MsBERT (short for: Manuscript BERT), designed from the ground up to handle Hebrew manuscript text. MsBERT substantially outperforms all existing Hebrew BERT models regarding the prediction of missing words in fragmentary Hebrew manuscript transcriptions in multiple genres, as well as regarding the task of differentiating between quoted passages and exegetical elaborations. We provide MsBERT for free download and unrestricted use, and we also provide an interactive and user-friendly website to allow manuscripts scholars to leverage the power of MsBERT in their scholarly work of reconstructing fragmentary Hebrew manuscripts.",
}
|
Hebrew manuscripts preserve thousands of textual transmissions of post-Biblical Hebrew texts from the first millennium. In many cases, the text in the manuscripts is not fully decipherable, whether due to deterioration, perforation, burns, or otherwise. Existing BERT models for Hebrew struggle to fill these gaps, due to the many orthographical deviations found in Hebrew manuscripts. We have pretrained a new dedicated BERT model, dubbed MsBERT (short for: Manuscript BERT), designed from the ground up to handle Hebrew manuscript text. MsBERT substantially outperforms all existing Hebrew BERT models regarding the prediction of missing words in fragmentary Hebrew manuscript transcriptions in multiple genres, as well as regarding the task of differentiating between quoted passages and exegetical elaborations. We provide MsBERT for free download and unrestricted use, and we also provide an interactive and user-friendly website to allow manuscript scholars to leverage the power of MsBERT in their scholarly work of reconstructing fragmentary Hebrew manuscripts.
|
[
"Shmidman, Avi",
"Shmidman, Ometz",
"Gershuni, Hillel",
"Koppel, Moshe"
] |
{M}s{BERT}: A New Model for the Reconstruction of Lacunae in {H}ebrew Manuscripts
|
ml4al-1.2
|
Poster
|
2407.12247v1
|
https://aclanthology.org/2024.ml4al-1.3.bib
|
@inproceedings{gamba-2024-predicate,
title = "Predicate Sense Disambiguation for {UMR} Annotation of {L}atin: Challenges and Insights",
author = "Gamba, Federica",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.3",
pages = "19--29",
abstract = "This paper explores the possibility to exploit different Pretrained Language Models (PLMs) to assist in a manual annotation task consisting in assigning the appropriate sense to verbal predicates in a Latin text. Indeed, this represents a crucial step when annotating data according to the Uniform Meaning Representation (UMR) framework, designed to annotate the semantic content of a text in a cross-linguistic perspective. We approach the study as a Word Sense Disambiguation task, with the primary goal of assessing the feasibility of leveraging available resources for Latin to streamline the labor-intensive annotation process. Our methodology revolves around the exploitation of contextual embeddings to compute token similarity, under the assumption that predicates sharing a similar sense would also share their context of occurrence. We discuss our findings, emphasizing applicability and limitations of this approach in the context of Latin, for which the limited amount of available resources poses additional challenges.",
}
|
This paper explores the possibility to exploit different Pretrained Language Models (PLMs) to assist in a manual annotation task consisting in assigning the appropriate sense to verbal predicates in a Latin text. Indeed, this represents a crucial step when annotating data according to the Uniform Meaning Representation (UMR) framework, designed to annotate the semantic content of a text in a cross-linguistic perspective. We approach the study as a Word Sense Disambiguation task, with the primary goal of assessing the feasibility of leveraging available resources for Latin to streamline the labor-intensive annotation process. Our methodology revolves around the exploitation of contextual embeddings to compute token similarity, under the assumption that predicates sharing a similar sense would also share their context of occurrence. We discuss our findings, emphasizing applicability and limitations of this approach in the context of Latin, for which the limited amount of available resources poses additional challenges.
|
[
"Gamba, Federica"
] |
Predicate Sense Disambiguation for {UMR} Annotation of {L}atin: Challenges and Insights
|
ml4al-1.3
|
Poster
|
2109.03322v2
|
https://aclanthology.org/2024.ml4al-1.4.bib
|
@inproceedings{chen-etal-2024-classification,
title = "Classification of Paleographic Artifacts at Scale: Mitigating Confounds and Distribution Shift in Cuneiform Tablet Dating",
author = "Chen, Danlu and
Tian, Jiahe and
Weng, Yufei and
Berg-Kirkpatrick, Taylor and
Myerston, Jacobo",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.4",
pages = "30--41",
abstract = "Cuneiform is the oldest writing system used for more than 3,000 years in ancient Mesopotamia. Cuneiform is written on clay tablets, which are hard to date because they often lack explicit references to time periods and their paleographic traits are not always reliable as a dating criterion. In this paper, we systematically analyse cuneiform dating problems using machine learning. We build baseline models for both visual and textual features and identify two major issues: confounds and distribution shift. We apply adversarial regularization and deep domain adaptation to mitigate these issues. On tablets from the same museum collections represented in the training set, we achieve accuracies as high as 84.42{\%}. However, when test tablets are taken from held-out collections, models generalize more poorly. This is only partially mitigated by robust learning techniques, highlighting important challenges for future work.",
}
|
Cuneiform is the oldest writing system used for more than 3,000 years in ancient Mesopotamia. Cuneiform is written on clay tablets, which are hard to date because they often lack explicit references to time periods and their paleographic traits are not always reliable as a dating criterion. In this paper, we systematically analyse cuneiform dating problems using machine learning. We build baseline models for both visual and textual features and identify two major issues: confounds and distribution shift. We apply adversarial regularization and deep domain adaptation to mitigate these issues. On tablets from the same museum collections represented in the training set, we achieve accuracies as high as 84.42{\%}. However, when test tablets are taken from held-out collections, models generalize more poorly. This is only partially mitigated by robust learning techniques, highlighting important challenges for future work.
|
[
"Chen, Danlu",
"Tian, Jiahe",
"Weng, Yufei",
"Berg-Kirkpatrick, Taylor",
"Myerston, Jacobo"
] |
Classification of Paleographic Artifacts at Scale: Mitigating Confounds and Distribution Shift in Cuneiform Tablet Dating
|
ml4al-1.4
|
Poster
|
2406.04039v1
|
https://aclanthology.org/2024.ml4al-1.5.bib
|
@inproceedings{nikolaev-etal-2024-classifier,
title = "Classifier identification in {A}ncient {E}gyptian as a low-resource sequence-labelling task",
author = "Nikolaev, Dmitry and
Grotenhuis, Jorke and
Harel, Haleli and
Goldwasser, Orly",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.5",
pages = "42--47",
abstract = "The complex Ancient Egyptian (AE) writing system was characterised by widespread use of graphemic classifiers (determinatives): silent (unpronounced) hieroglyphic signs clarifying the meaning or indicating the pronunciation of the host word. The study of classifiers has intensified in recent years with the launch and quick growth of the iClassifier project, a web-based platform for annotation and analysis of classifiers in ancient and modern languages. Thanks to the data contributed by the project participants, it is now possible to formulate the identification of classifiers in AE texts as an NLP task. In this paper, we make first steps towards solving this task by implementing a series of sequence-labelling neural models, which achieve promising performance despite the modest amount of training data. We discuss tokenisation and operationalisation issues arising from tackling AE texts and contrast our approach with frequency-based baselines.",
}
|
The complex Ancient Egyptian (AE) writing system was characterised by widespread use of graphemic classifiers (determinatives): silent (unpronounced) hieroglyphic signs clarifying the meaning or indicating the pronunciation of the host word. The study of classifiers has intensified in recent years with the launch and quick growth of the iClassifier project, a web-based platform for annotation and analysis of classifiers in ancient and modern languages. Thanks to the data contributed by the project participants, it is now possible to formulate the identification of classifiers in AE texts as an NLP task. In this paper, we make first steps towards solving this task by implementing a series of sequence-labelling neural models, which achieve promising performance despite the modest amount of training data. We discuss tokenisation and operationalisation issues arising from tackling AE texts and contrast our approach with frequency-based baselines.
|
[
"Nikolaev, Dmitry",
"Grotenhuis, Jorke",
"Harel, Haleli",
"Goldwasser, Orly"
] |
Classifier identification in {A}ncient {E}gyptian as a low-resource sequence-labelling task
|
ml4al-1.5
|
Poster
|
2407.00475v1
|
https://aclanthology.org/2024.ml4al-1.6.bib
|
@inproceedings{ozaki-etal-2024-long,
title = "Long Unit Word Tokenization and Bunsetsu Segmentation of Historical {J}apanese",
author = "Ozaki, Hiroaki and
Komiya, Kanako and
Asahara, Masayuki and
Ogiso, Toshinobu",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.6",
pages = "48--55",
abstract = "In Japanese, the natural minimal phrase of a sentence is the {``}bunsetsu{''} and it serves as a natural boundary of a sentence for native speakers rather than words, and thus grammatical analysis in Japanese linguistics commonly operates on the basis of bunsetsu units.In contrast, because Japanese does not have delimiters between words, there are two major categories of word definition, namely, Short Unit Words (SUWs) and Long Unit Words (LUWs).Though a SUW dictionary is available, LUW is not.Hence, this study focuses on providing deep learning-based (or LLM-based) bunsetsu and Long Unit Words analyzer for the Heian period (AD 794-1185) and evaluating its performances.We model the parser as transformer-based joint sequential labels model, which combine bunsetsu BI tag, LUW BI tag, and LUW Part-of-Speech (POS) tag for each SUW token.We train our models on corpora of each period including contemporary and historical Japanese.The results range from 0.976 to 0.996 in f1 value for both bunsetsu and LUW reconstruction indicating that our models achieve comparable performance with models for a contemporary Japanese corpus.Through the statistical analysis and diachronic case study, the estimation of bunsetsu could be influenced by the grammaticalization of morphemes.",
}
|
In Japanese, the natural minimal phrase of a sentence is the {``}bunsetsu{''}, and it serves as a natural boundary of a sentence for native speakers rather than words; thus, grammatical analysis in Japanese linguistics commonly operates on the basis of bunsetsu units. In contrast, because Japanese does not have delimiters between words, there are two major categories of word definition, namely Short Unit Words (SUWs) and Long Unit Words (LUWs). Though a SUW dictionary is available, an LUW dictionary is not. Hence, this study focuses on providing a deep learning-based (or LLM-based) bunsetsu and Long Unit Word analyzer for the Heian period (AD 794-1185) and evaluating its performance. We model the parser as a transformer-based joint sequential labelling model, which combines the bunsetsu BI tag, LUW BI tag, and LUW Part-of-Speech (POS) tag for each SUW token. We train our models on corpora of each period, including contemporary and historical Japanese. The results range from 0.976 to 0.996 in F1 for both bunsetsu and LUW reconstruction, indicating that our models achieve performance comparable with models for a contemporary Japanese corpus. Through statistical analysis and a diachronic case study, we find that the estimation of bunsetsu could be influenced by the grammaticalization of morphemes.
|
[
"Ozaki, Hiroaki",
"Komiya, Kanako",
"Asahara, Masayuki",
"Ogiso, Toshinobu"
] |
Long Unit Word Tokenization and Bunsetsu Segmentation of Historical {J}apanese
|
ml4al-1.6
|
Poster
|
1906.09719v1
|
https://aclanthology.org/2024.ml4al-1.7.bib
|
@inproceedings{fitzgerald-barney-2024-new,
title = "A new machine-actionable corpus for ancient text restoration",
author = "Fitzgerald, Will and
Barney, Justin",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.7",
pages = "56--60",
abstract = "The Machine-Actionable Ancient Text (MAAT) Corpus is a new resource providing training and evaluation data for restoring lacunae in ancient Greek, Latin, and Coptic texts. Current text restoration systems require large amounts of data for training and task-relevant means for evaluation. The MAAT Corpus addresses this need by converting texts available in EpiDoc XML format into a machine-actionable format that preserves the most textually salient aspects needed for machine learning: the text itself, lacunae, and textual restorations. Structured test cases are generated from the corpus that align with the actual text restoration task performed by papyrologists and epigraphist, enabling more realistic evaluation than the synthetic tasks used previously. The initial 1.0 beta release contains approximately 134,000 text editions, 178,000 text blocks, and 750,000 individual restorations, with Greek and Latin predominating. This corpus aims to facilitate the development of computational methods to assist scholars in accurately restoring ancient texts.",
}
|
The Machine-Actionable Ancient Text (MAAT) Corpus is a new resource providing training and evaluation data for restoring lacunae in ancient Greek, Latin, and Coptic texts. Current text restoration systems require large amounts of data for training and task-relevant means for evaluation. The MAAT Corpus addresses this need by converting texts available in EpiDoc XML format into a machine-actionable format that preserves the most textually salient aspects needed for machine learning: the text itself, lacunae, and textual restorations. Structured test cases are generated from the corpus that align with the actual text restoration task performed by papyrologists and epigraphists, enabling more realistic evaluation than the synthetic tasks used previously. The initial 1.0 beta release contains approximately 134,000 text editions, 178,000 text blocks, and 750,000 individual restorations, with Greek and Latin predominating. This corpus aims to facilitate the development of computational methods to assist scholars in accurately restoring ancient texts.
|
[
"Fitzgerald, Will",
"Barney, Justin"
] |
A new machine-actionable corpus for ancient text restoration
|
ml4al-1.7
|
Poster
|
1910.06262v1
|
https://aclanthology.org/2024.ml4al-1.8.bib
|
@inproceedings{levine-etal-2024-lacuna,
title = "Lacuna Language Learning: Leveraging {RNN}s for Ranked Text Completion in Digitized {C}optic Manuscripts",
author = "Levine, Lauren and
Li, Cindy and
BremerMcCollum, Lydia and
Wagner, Nicholas and
Zeldes, Amir",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.8",
pages = "61--70",
abstract = "Ancient manuscripts are frequently damaged, containing gaps in the text known as lacunae. In this paper, we present a bidirectional RNN model for character prediction of Coptic characters in manuscript lacunae. Our best model performs with 72{\%} accuracy on single character reconstruction, but falls to 37{\%} when reconstructing lacunae of various lengths. While not suitable for definitive manuscript reconstruction, we argue that our RNN model can help scholars rank the likelihood of textual reconstructions. As evidence, we use our RNN model to rank reconstructions in two early Coptic manuscripts. Our investigation shows that neural models can augment traditional methods of textual restoration, providing scholars with an additional tool to assess lacunae in Coptic manuscripts.",
}
|
Ancient manuscripts are frequently damaged, containing gaps in the text known as lacunae. In this paper, we present a bidirectional RNN model for character prediction of Coptic characters in manuscript lacunae. Our best model performs with 72{\%} accuracy on single character reconstruction, but falls to 37{\%} when reconstructing lacunae of various lengths. While not suitable for definitive manuscript reconstruction, we argue that our RNN model can help scholars rank the likelihood of textual reconstructions. As evidence, we use our RNN model to rank reconstructions in two early Coptic manuscripts. Our investigation shows that neural models can augment traditional methods of textual restoration, providing scholars with an additional tool to assess lacunae in Coptic manuscripts.
|
[
"Levine, Lauren",
"Li, Cindy",
"BremerMcCollum, Lydia",
"Wagner, Nicholas",
"Zeldes, Amir"
] |
Lacuna Language Learning: Leveraging {RNN}s for Ranked Text Completion in Digitized {C}optic Manuscripts
|
ml4al-1.8
|
Poster
|
2407.12247v1
|
https://aclanthology.org/2024.ml4al-1.9.bib
|
@inproceedings{cao-etal-2024-deep,
title = "Deep Learning Meets Egyptology: a Hieroglyphic Transformer for Translating {A}ncient {E}gyptian",
author = "Cao, Mattia and
De Cao, Nicola and
Colonna, Angelo and
Lenci, Alessandro",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.9",
pages = "71--86",
abstract = "This work explores the potential of Transformer models focusing on the translation of ancient Egyptian hieroglyphs. We present a novel Hieroglyphic Transformer model, built upon the powerful M2M-100 multilingual translation framework and trained on a dataset we customised from the Thesaurus Linguae Aegyptiae database. Our experiments demonstrate promising results, with the model achieving significant accuracy in translating hieroglyphics into both German and English. This work holds significant implications for Egyptology, potentially accelerating the translation process and unlocking new research approaches.",
}
|
This work explores the potential of Transformer models focusing on the translation of ancient Egyptian hieroglyphs. We present a novel Hieroglyphic Transformer model, built upon the powerful M2M-100 multilingual translation framework and trained on a dataset we customised from the Thesaurus Linguae Aegyptiae database. Our experiments demonstrate promising results, with the model achieving significant accuracy in translating hieroglyphics into both German and English. This work holds significant implications for Egyptology, potentially accelerating the translation process and unlocking new research approaches.
|
[
"Cao, Mattia",
"De Cao, Nicola",
"Colonna, Angelo",
"Lenci, Aless",
"ro"
] |
Deep Learning Meets Egyptology: a Hieroglyphic Transformer for Translating {A}ncient {E}gyptian
|
ml4al-1.9
|
Poster
|
2407.00475v1
|
https://aclanthology.org/2024.ml4al-1.10.bib
|
@inproceedings{sahala-lincke-2024-neural,
title = "Neural Lemmatization and {POS}-tagging models for {C}optic, Demotic and Earlier {E}gyptian",
author = "Sahala, Aleksi and
Lincke, Eliese-Sophia",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.10",
pages = "87--97",
abstract = "We present models for lemmatizing and POS-tagging Earlier Egyptian, Coptic and Demotic to test the performance of our pipeline for the ancient languages of Egypt. Of these languages, Demotic and Egyptian are known to be difficult to annotate due to their high extent of ambiguity. We report lemmatization accuracy of 86{\%}, 91{\%} and 99{\%}, and XPOS-tagging accuracy of 89{\%}, 95{\%} and 98{\%} for Earlier Egyptian, Demotic and Coptic, respectively.",
}
|
We present models for lemmatizing and POS-tagging Earlier Egyptian, Coptic and Demotic to test the performance of our pipeline for the ancient languages of Egypt. Of these languages, Demotic and Egyptian are known to be difficult to annotate due to their high extent of ambiguity. We report lemmatization accuracy of 86{\%}, 91{\%} and 99{\%}, and XPOS-tagging accuracy of 89{\%}, 95{\%} and 98{\%} for Earlier Egyptian, Demotic and Coptic, respectively.
|
[
"Sahala, Aleksi",
"Lincke, Eliese-Sophia"
] |
Neural Lemmatization and {POS}-tagging models for {C}optic, Demotic and Earlier {E}gyptian
|
ml4al-1.10
|
Poster
|
2407.12247v1
|
https://aclanthology.org/2024.ml4al-1.11.bib
|
@inproceedings{zhou-etal-2024-ufcnet,
title = "{UFCN}et: Unsupervised Network based on {F}ourier transform and Convolutional attention for Oracle Character Recognition",
author = "Zhou, Yanan and
Liu, Guoqi and
Yang, Yiping and
Ru, Linyuan and
Liu, Dong and
Li, Xueshan",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.11",
pages = "98--106",
abstract = "Oracle bone script (OBS) is the earliest writing system in China, which is of great value in the improvement of archaeology and Chinese cultural history. However, there are some problems such as the lack of labels and the difficulty to distinguish the glyphs from the background of OBS, which makes the automatic recognition of OBS in the real world not achieve the satisfactory effect. In this paper, we propose a character recognition method based on an unsupervised domain adaptive network (UFCNet). Firstly, a convolutional attention fusion module (CAFM) is designed in the encoder to obtain more global features through multi-layer feature fusion. Second, we construct a Fourier transform (FT) module that focuses on the differences between glyphs and backgrounds. Finally, to further improve the network{'}s ability to recognize character edges, we introduce a kernel norm-constrained loss function. Extensive experiments perform on the Oracle-241 dataset show that the proposed method is superior to other adaptive methods. The code will be available at https://github.com/zhouynan/UFCNet.",
}
|
Oracle bone script (OBS) is the earliest writing system in China and is of great value to archaeology and the study of Chinese cultural history. However, problems such as the lack of labels and the difficulty of distinguishing glyphs from the background of OBS prevent automatic recognition of OBS in the real world from achieving satisfactory results. In this paper, we propose a character recognition method based on an unsupervised domain adaptive network (UFCNet). First, a convolutional attention fusion module (CAFM) is designed in the encoder to obtain more global features through multi-layer feature fusion. Second, we construct a Fourier transform (FT) module that focuses on the differences between glyphs and backgrounds. Finally, to further improve the network{'}s ability to recognize character edges, we introduce a kernel norm-constrained loss function. Extensive experiments performed on the Oracle-241 dataset show that the proposed method is superior to other adaptive methods. The code will be available at https://github.com/zhouynan/UFCNet.
|
[
"Zhou, Yanan",
"Liu, Guoqi",
"Yang, Yiping",
"Ru, Linyuan",
"Liu, Dong",
"Li, Xueshan"
] |
{UFCN}et: Unsupervised Network based on {F}ourier transform and Convolutional attention for Oracle Character Recognition
|
ml4al-1.11
|
Poster
|
2405.15932v1
|
https://aclanthology.org/2024.ml4al-1.12.bib
|
@inproceedings{wang-etal-2024-coarse,
title = "Coarse-to-Fine Generative Model for Oracle Bone Inscriptions Inpainting",
author = "Wang, Shibin and
Guo, Wenjie and
Xu, Yubo and
Liu, Dong and
Li, Xueshan",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.12",
pages = "107--114",
abstract = "Due to ancient origin, there are many incomplete characters in the unearthed Oracle Bone Inscriptions(OBI), which brings the great challenges to recognition and research. In recent years, image inpainting techniques have made remarkable progress. However, these models are unable to adapt to the unique font shape and complex text background of OBI. To meet these aforementioned challenges, we propose a two-stage method for restoring damaged OBI using Generative Adversarial Networks (GAN), which incorporates a dual discriminator structure to capture both global and local image information. In order to accurately restore the image structure and details, the spatial attention mechanism and a novel loss function are proposed. By feeding clear copies of existing OBI and various types of masks into the network, it learns to generate content for the missing regions. Experimental results demonstrate the effectiveness of our proposed method in completing OBI compared to several state-of-the-art techniques.",
}
|
Due to their ancient origin, there are many incomplete characters in unearthed Oracle Bone Inscriptions (OBI), which brings great challenges to recognition and research. In recent years, image inpainting techniques have made remarkable progress. However, these models are unable to adapt to the unique font shape and complex text background of OBI. To meet these challenges, we propose a two-stage method for restoring damaged OBI using Generative Adversarial Networks (GAN), which incorporates a dual discriminator structure to capture both global and local image information. In order to accurately restore the image structure and details, a spatial attention mechanism and a novel loss function are proposed. By feeding clear copies of existing OBI and various types of masks into the network, it learns to generate content for the missing regions. Experimental results demonstrate the effectiveness of our proposed method in completing OBI compared to several state-of-the-art techniques.
|
[
"Wang, Shibin",
"Guo, Wenjie",
"Xu, Yubo",
"Liu, Dong",
"Li, Xueshan"
] |
Coarse-to-Fine Generative Model for Oracle Bone Inscriptions Inpainting
|
ml4al-1.12
|
Poster
|
2406.00684v1
|
https://aclanthology.org/2024.ml4al-1.13.bib
|
@inproceedings{papavassileiou-kosmopoulos-2024-restoring,
title = "Restoring {M}ycenaean {L}inear {B} {`}A{\&}{B}{'} series tablets using supervised and transfer learning",
author = "Papavassileiou, Katerina and
Kosmopoulos, Dimitrios",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.13",
pages = "115--129",
abstract = "We investigate the problem of restoring Mycenaean linear B clay tablets, dating from about 1400 B.C. to roughly 1200 B.C., by using text infilling methods based on machine learning models. Our goals here are: first to try to improve the results of the methods used in the related literature by focusing on the characteristics of the Mycenaean Linear B writing system (series D), second to examine the same problem for the first time on series A{\&}B and finally to investigate transfer learning using series D as source and the smaller series A{\&}B as target. Our results show promising results in the supervised learning tasks, while further investigation is needed to better exploit the merits of transfer learning.",
}
|
We investigate the problem of restoring Mycenaean Linear B clay tablets, dating from about 1400 B.C. to roughly 1200 B.C., by using text infilling methods based on machine learning models. Our goals here are: first, to try to improve the results of the methods used in the related literature by focusing on the characteristics of the Mycenaean Linear B writing system (series D); second, to examine the same problem for the first time on series A{\&}B; and finally, to investigate transfer learning using series D as the source and the smaller series A{\&}B as the target. Our results are promising in the supervised learning tasks, while further investigation is needed to better exploit the merits of transfer learning.
|
[
"Papavassileiou, Katerina",
"Kosmopoulos, Dimitrios"
] |
Restoring {M}ycenaean {L}inear {B} {`}A{\&}{B}{'} series tablets using supervised and transfer learning
|
ml4al-1.13
|
Poster
|
2003.01912v1
|
https://aclanthology.org/2024.ml4al-1.14.bib
|
@inproceedings{gordin-etal-2024-cured,
title = "{C}u{R}e{D}: Deep Learning Optical Character Recognition for Cuneiform Text Editions and Legacy Materials",
author = "Gordin, Shai and
Alper, Morris and
Romach, Avital and
Saenz Santos, Luis and
Yochai, Naama and
Lalazar, Roey",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.14",
pages = "130--140",
abstract = "Cuneiform documents, the earliest known form of writing, are prolific textual sources of the ancient past. Experts publish editions of these texts in transliteration using specialized typesetting, but most remain inaccessible for computational analysis in traditional printed books or legacy materials. Off-the-shelf OCR systems are insufficient for digitization without adaptation. We present CuReD (Cuneiform Recognition-Documents), a deep learning-based human-in-the-loop OCR pipeline for digitizing scanned transliterations of cuneiform texts. CuReD has a character error rate of 9{\%} on clean data and 11{\%} on representative scans. We digitized a challenging sample of transliterated cuneiform documents, as well as lexical index cards from the University of Pennsylvania Museum, demonstrating the feasibility of our platform for enabling computational analysis and bolstering machine-readable cuneiform text datasets. Our result provide the first human-in-the-loop pipeline and interface for digitizing transliterated cuneiform sources and legacy materials, enabling the enrichment of digital sources of these low-resource languages.",
}
|
Cuneiform documents, the earliest known form of writing, are prolific textual sources of the ancient past. Experts publish editions of these texts in transliteration using specialized typesetting, but most remain inaccessible for computational analysis in traditional printed books or legacy materials. Off-the-shelf OCR systems are insufficient for digitization without adaptation. We present CuReD (Cuneiform Recognition-Documents), a deep learning-based human-in-the-loop OCR pipeline for digitizing scanned transliterations of cuneiform texts. CuReD has a character error rate of 9{\%} on clean data and 11{\%} on representative scans. We digitized a challenging sample of transliterated cuneiform documents, as well as lexical index cards from the University of Pennsylvania Museum, demonstrating the feasibility of our platform for enabling computational analysis and bolstering machine-readable cuneiform text datasets. Our results provide the first human-in-the-loop pipeline and interface for digitizing transliterated cuneiform sources and legacy materials, enabling the enrichment of digital sources of these low-resource languages.
|
[
"Gordin, Shai",
"Alper, Morris",
"Romach, Avital",
"Saenz Santos, Luis",
"Yochai, Naama",
"Lalazar, Roey"
] |
{C}u{R}e{D}: Deep Learning Optical Character Recognition for Cuneiform Text Editions and Legacy Materials
|
ml4al-1.14
|
Poster
|
2009.10794v1
|
https://aclanthology.org/2024.ml4al-1.15.bib
|
@inproceedings{kessler-2024-towards,
title = "Towards Context-aware Normalization of Variant Characters in Classical {C}hinese Using Parallel Editions and {BERT}",
author = "Kessler, Florian",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.15",
pages = "141--151",
abstract = "For the automatic processing of Classical Chinese texts it is highly desirable to normalize variant characters, i.e. characters with different visual forms that are being used to represent the same morpheme, into a single form. However, there are some variant characters that are used interchangeably by some writers but deliberately employed to distinguish between different meanings by others. Hence, in order to avoid losing information in the normalization processes by conflating meaningful distinctions between variants, an intelligent normalization system that takes context into account is needed. Towards the goal of developing such a system, in this study, we describe how a dataset with usage samples of variant characters can be extracted from a corpus of paired editions of multiple texts. Using the dataset, we conduct two experiments, testing whether models can be trained with contextual word embeddings to predict variant characters. The results of the experiments show that while this is often possible for single texts, most conventions learned do not transfer well between documents.",
}
|
For the automatic processing of Classical Chinese texts it is highly desirable to normalize variant characters, i.e. characters with different visual forms that are being used to represent the same morpheme, into a single form. However, there are some variant characters that are used interchangeably by some writers but deliberately employed to distinguish between different meanings by others. Hence, in order to avoid losing information in the normalization processes by conflating meaningful distinctions between variants, an intelligent normalization system that takes context into account is needed. Towards the goal of developing such a system, in this study, we describe how a dataset with usage samples of variant characters can be extracted from a corpus of paired editions of multiple texts. Using the dataset, we conduct two experiments, testing whether models can be trained with contextual word embeddings to predict variant characters. The results of the experiments show that while this is often possible for single texts, most conventions learned do not transfer well between documents.
|
[
"Kessler, Florian"
] |
Towards Context-aware Normalization of Variant Characters in Classical {C}hinese Using Parallel Editions and {BERT}
|
ml4al-1.15
|
Poster
|
2207.12089v1
|
https://aclanthology.org/2024.ml4al-1.16.bib
|
@inproceedings{beersmans-etal-2024-gotta,
title = "{``}Gotta catch {`}em all!{''}: Retrieving people in {A}ncient {G}reek texts combining transformer models and domain knowledge",
author = "Beersmans, Marijke and
Keersmaekers, Alek and
Graaf, Evelien and
Van De Cruys, Tim and
Depauw, Mark and
Fantoli, Margherita",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.16",
pages = "152--164",
abstract = "In this paper, we present a study of transformer-based Named Entity Recognition (NER) as applied to Ancient Greek texts, with an emphasis on retrieving personal names. Recent research shows that, while the task remains difficult, the use of transformer models results in significant improvements. We, therefore, compare the performance of four transformer models on the task of NER for the categories of people, locations and groups, and add an out-of-domain test set to the existing datasets. Results on this set highlight the shortcomings of the models when confronted with a random sample of sentences. To be able to more straightforwardly integrate domain and linguistic knowledge to improve performance, we narrow down our approach to the category of people. The task is simplified to a binary PERS/MISC classification on the token level, starting from capitalised words. Next, we test the use of domain and linguistic knowledge to improve the results. We find that including simple gazetteer information as a binary mask has a marginally positive effect on newly annotated data and that treebanks can be used to help identify multi-word individuals if they are scarcely or inconsistently annotated in the available training data. The qualitative error analysis identifies the potential for improvement in both manual annotation and the inclusion of domain and linguistic knowledge in the transformer models.",
}
|
In this paper, we present a study of transformer-based Named Entity Recognition (NER) as applied to Ancient Greek texts, with an emphasis on retrieving personal names. Recent research shows that, while the task remains difficult, the use of transformer models results in significant improvements. We, therefore, compare the performance of four transformer models on the task of NER for the categories of people, locations and groups, and add an out-of-domain test set to the existing datasets. Results on this set highlight the shortcomings of the models when confronted with a random sample of sentences. To be able to more straightforwardly integrate domain and linguistic knowledge to improve performance, we narrow down our approach to the category of people. The task is simplified to a binary PERS/MISC classification on the token level, starting from capitalised words. Next, we test the use of domain and linguistic knowledge to improve the results. We find that including simple gazetteer information as a binary mask has a marginally positive effect on newly annotated data and that treebanks can be used to help identify multi-word individuals if they are scarcely or inconsistently annotated in the available training data. The qualitative error analysis identifies the potential for improvement in both manual annotation and the inclusion of domain and linguistic knowledge in the transformer models.
|
[
"Beersmans, Marijke",
"Keersmaekers, Alek",
"Graaf, Evelien",
"Van De Cruys, Tim",
"Depauw, Mark",
"Fantoli, Margherita"
] |
{``}Gotta catch {`}em all!{''}: Retrieving people in {A}ncient {G}reek texts combining transformer models and domain knowledge
|
ml4al-1.16
|
Poster
|
2308.13116v1
|
https://aclanthology.org/2024.ml4al-1.17.bib
|
@inproceedings{keersmaekers-mercelis-2024-adapting,
title = "Adapting transformer models to morphological tagging of two highly inflectional languages: a case study on {A}ncient {G}reek and {L}atin",
author = "Keersmaekers, Alek and
Mercelis, Wouter",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.17",
pages = "165--176",
abstract = "Natural language processing for Greek and Latin, inflectional languages with small corpora, requires special techniques. For morphological tagging, transformer models show promising potential, but the best approach to use these models is unclear. For both languages, this paper examines the impact of using morphological lexica, training different model types (a single model with a combined feature tag, multiple models for separate features, and a multi-task model for all features), and adding linguistic constraints. We find that, although simply fine-tuning transformers to predict a monolithic tag may already yield decent results, each of these adaptations can further improve tagging accuracy.",
}
|
Natural language processing for Greek and Latin, inflectional languages with small corpora, requires special techniques. For morphological tagging, transformer models show promising potential, but the best approach to use these models is unclear. For both languages, this paper examines the impact of using morphological lexica, training different model types (a single model with a combined feature tag, multiple models for separate features, and a multi-task model for all features), and adding linguistic constraints. We find that, although simply fine-tuning transformers to predict a monolithic tag may already yield decent results, each of these adaptations can further improve tagging accuracy.
|
[
"Keersmaekers, Alek",
"Mercelis, Wouter"
] |
Adapting transformer models to morphological tagging of two highly inflectional languages: a case study on {A}ncient {G}reek and {L}atin
|
ml4al-1.17
|
Poster
|
2308.13116v1
|
https://aclanthology.org/2024.ml4al-1.18.bib
|
@inproceedings{west-etal-2024-deep,
title = "A deep learning pipeline for the palaeographical dating of ancient {G}reek papyrus fragments",
author = "West, Graham and
Swindall, Matthew and
Brusuelas, James and
Wallin, John",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.18",
pages = "177--185",
abstract = "In this paper we present a deep learning pipeline for automatically dating ancient Greek papyrus fragments based solely on fragment images. The overall pipeline consists of several stages, including handwritten text recognition (HTR) to detect and classify characters, filtering and grouping of detected characters, 24 character-level date prediction models, and a fragment-level date prediction model that utilizes the per-character predictions. A new dataset (containing approximately 7,000 fragment images and 778,000 character images) was created by scraping papyrus databases, extracting fragment images with known dates, and running them through our HTR models to obtain labeled character images. Transfer learning was then used to fine-tune separate ResNets to predict dates for individual characters which are then used, in aggregate, to train the fragment-level date prediction model. Experiments show that even though the average accuracies of character-level dating models is low, between 35{\%}-45{\%}, the fragment-level model can achieve up to 79{\%} accuracy in predicting a broad, two-century date range for fragments with many characters. We then discuss the limitations of this approach and outline future work to improve temporal resolution and further testing on additional papyri. This image-based deep learning approach has great potential to assist scholars in the palaeographical analysis and dating of ancient Greek manuscripts.",
}
|
In this paper we present a deep learning pipeline for automatically dating ancient Greek papyrus fragments based solely on fragment images. The overall pipeline consists of several stages, including handwritten text recognition (HTR) to detect and classify characters, filtering and grouping of detected characters, 24 character-level date prediction models, and a fragment-level date prediction model that utilizes the per-character predictions. A new dataset (containing approximately 7,000 fragment images and 778,000 character images) was created by scraping papyrus databases, extracting fragment images with known dates, and running them through our HTR models to obtain labeled character images. Transfer learning was then used to fine-tune separate ResNets to predict dates for individual characters, which are then used, in aggregate, to train the fragment-level date prediction model. Experiments show that even though the average accuracies of the character-level dating models are low, between 35{\%}-45{\%}, the fragment-level model can achieve up to 79{\%} accuracy in predicting a broad, two-century date range for fragments with many characters. We then discuss the limitations of this approach and outline future work to improve temporal resolution and conduct further testing on additional papyri. This image-based deep learning approach has great potential to assist scholars in the palaeographical analysis and dating of ancient Greek manuscripts.
|
[
"West, Graham",
"Swindall, Matthew",
"Brusuelas, James",
"Wallin, John"
] |
A deep learning pipeline for the palaeographical dating of ancient {G}reek papyrus fragments
|
ml4al-1.18
|
Poster
|
2407.12013v1
|
https://aclanthology.org/2024.ml4al-1.19.bib
|
@inproceedings{jiang-anderson-2024-ud,
title = "{UD}-{ETCSUX}: Toward a Better Understanding of {S}umerian Syntax",
author = "Jiang, Kenan and
Anderson, Adam",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.19",
pages = "186--191",
abstract = "Beginning with the discovery of the cuneiform writing system in 1835, there have been numerous grammars published illustrating the complexities of the Sumerian language. However, the one thing they have in common is their omission of dependency rules for syntax in Sumerian linguistics. For this reason we are working toward a better understanding of Sumerian syntax, by means of dependency-grammar in the Universal Dependencies (UD) framework. Therefore, in this study we articulate the methods and engineering techniques that can address the hardships in annotating dependency relationships in the Sumerian texts in transliteration from the Electronic Text Corpora of Sumerian (ETCSUX). Our code can be found at https://github.com/ancient-world-citation-analysis/UD-ETCSUX.",
}
|
Beginning with the discovery of the cuneiform writing system in 1835, there have been numerous grammars published illustrating the complexities of the Sumerian language. However, the one thing they have in common is their omission of dependency rules for syntax in Sumerian linguistics. For this reason we are working toward a better understanding of Sumerian syntax, by means of dependency-grammar in the Universal Dependencies (UD) framework. Therefore, in this study we articulate the methods and engineering techniques that can address the hardships in annotating dependency relationships in the Sumerian texts in transliteration from the Electronic Text Corpora of Sumerian (ETCSUX). Our code can be found at https://github.com/ancient-world-citation-analysis/UD-ETCSUX.
|
[
"Jiang, Kenan",
"Anderson, Adam"
] |
{UD}-{ETCSUX}: Toward a Better Understanding of {S}umerian Syntax
|
ml4al-1.19
|
Poster
|
2207.12102v4
|
https://aclanthology.org/2024.ml4al-1.20.bib
|
@inproceedings{simmons-etal-2024-sumtablets,
title = "{S}um{T}ablets: A Transliteration Dataset of {S}umerian Tablets",
author = "Simmons, Cole and
Diehl Martinez, Richard and
Jurafsky, Dan",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.20",
pages = "192--202",
abstract = "Transliterating Sumerian is a key step in understanding Sumerian texts, but remains a difficult and time-consuming task. With more than 100,000 known texts and comparatively few specialists, manually maintaining up-to-date transliterations for the entire corpus is impractical. While many transliterations have been published online thanks to the dedicated effort of previous projects, the lack of a comprehensive, easily accessible dataset that pairs digital representations of source glyphs with their transliterations has hindered the application of natural language processing (NLP) methods to this task.To address this gap, we present SumTablets, the largest collection of Sumerian cuneiform tablets structured as Unicode glyph{--}transliteration pairs. Our dataset comprises 91,606 tablets (totaling 6,970,407 glyphs) with associated period and genre metadata. We release SumTablets as a Hugging Face Dataset.To construct SumTablets, we first preprocess and standardize publicly available transliterations. We then map them back to a Unicode representation of their source glyphs, retaining parallel structural information (e.g., surfaces, newlines, broken segments) through the use of special tokens.We leverage SumTablets to implement and evaluate two transliteration approaches: 1) weighted sampling from a glyph{'}s possible readings, 2) fine-tuning an autoregressive language model. Our fine-tuned language model achieves an average transliteration character-level F-score (chrF) of 97.55, demonstrating the potential use of deep learning methods in Assyriological research.",
}
|
Transliterating Sumerian is a key step in understanding Sumerian texts, but remains a difficult and time-consuming task. With more than 100,000 known texts and comparatively few specialists, manually maintaining up-to-date transliterations for the entire corpus is impractical. While many transliterations have been published online thanks to the dedicated effort of previous projects, the lack of a comprehensive, easily accessible dataset that pairs digital representations of source glyphs with their transliterations has hindered the application of natural language processing (NLP) methods to this task. To address this gap, we present SumTablets, the largest collection of Sumerian cuneiform tablets structured as Unicode glyph{--}transliteration pairs. Our dataset comprises 91,606 tablets (totaling 6,970,407 glyphs) with associated period and genre metadata. We release SumTablets as a Hugging Face Dataset. To construct SumTablets, we first preprocess and standardize publicly available transliterations. We then map them back to a Unicode representation of their source glyphs, retaining parallel structural information (e.g., surfaces, newlines, broken segments) through the use of special tokens. We leverage SumTablets to implement and evaluate two transliteration approaches: 1) weighted sampling from a glyph{'}s possible readings, 2) fine-tuning an autoregressive language model. Our fine-tuned language model achieves an average transliteration character-level F-score (chrF) of 97.55, demonstrating the potential use of deep learning methods in Assyriological research.
|
[
"Simmons, Cole",
"Diehl Martinez, Richard",
"Jurafsky, Dan"
] |
{S}um{T}ablets: A Transliteration Dataset of {S}umerian Tablets
|
ml4al-1.20
|
Poster
|
2306.01268v1
|
https://aclanthology.org/2024.ml4al-1.21.bib
|
@inproceedings{hudspeth-etal-2024-latin,
title = "{L}atin Treebanks in Review: An Evaluation of Morphological Tagging Across Time",
author = "Hudspeth, Marisa and
O{'}Connor, Brendan and
Thompson, Laure",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.21",
pages = "203--218",
abstract = "Existing Latin treebanks draw from Latin{'}s long written tradition, spanning 17 centuries and a variety of cultures. Recent efforts have begun to harmonize these treebanks{'} annotations to better train and evaluate morphological taggers. However, the heterogeneity of these treebanks must be carefully considered to build effective and reliable data. In this work, we review existing Latin treebanks to identify the texts they draw from, identify their overlap, and document their coverage across time and genre. We additionally design automated conversions of their morphological feature annotations into the conventions of standard Latin grammar. From this, we build new time-period data splits that draw from the existing treebanks which we use to perform a broad cross-time analysis for POS and morphological feature tagging. We find that BERT-based taggers outperform existing taggers while also being more robust to cross-domain shifts.",
}
|
Existing Latin treebanks draw from Latin{'}s long written tradition, spanning 17 centuries and a variety of cultures. Recent efforts have begun to harmonize these treebanks{'} annotations to better train and evaluate morphological taggers. However, the heterogeneity of these treebanks must be carefully considered to build effective and reliable data. In this work, we review existing Latin treebanks to identify the texts they draw from, identify their overlap, and document their coverage across time and genre. We additionally design automated conversions of their morphological feature annotations into the conventions of standard Latin grammar. From this, we build new time-period data splits that draw from the existing treebanks which we use to perform a broad cross-time analysis for POS and morphological feature tagging. We find that BERT-based taggers outperform existing taggers while also being more robust to cross-domain shifts.
|
[
"Hudspeth, Marisa",
"O{'}Connor, Brendan",
"Thompson, Laure"
] |
{L}atin Treebanks in Review: An Evaluation of Morphological Tagging Across Time
|
ml4al-1.21
|
Poster
|
2408.06675v1
|
https://aclanthology.org/2024.ml4al-1.22.bib
|
@inproceedings{tsukagoshi-ohmukai-2024-metronome,
title = "The Metronome Approach to {S}anskrit Meter: Analysis for the Rigveda",
author = "Tsukagoshi, Yuzuki and
Ohmukai, Ikki",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.22",
pages = "219--223",
abstract = "This study analyzes the verses of the Rigveda, the oldest Sanskrit text, from a metrical perspective. Based on metrical structures, the verses are represented by four elements: light syllables, heavy syllables, word boundaries, and line boundaries. As a result, it became evident that among verses traditionally categorized under the same metrical name, there are those forming distinct clusters. Furthermore, the study reveals commonalities in metrical structures, such as similar metrical patterns grouping together despite differences in the number of lines. Going forward, it is anticipated that this methodology will enable comparisons across multiple languages within the Indo-European language family.",
}
|
This study analyzes the verses of the Rigveda, the oldest Sanskrit text, from a metrical perspective. Based on metrical structures, the verses are represented by four elements: light syllables, heavy syllables, word boundaries, and line boundaries. As a result, it became evident that among verses traditionally categorized under the same metrical name, there are those forming distinct clusters. Furthermore, the study reveals commonalities in metrical structures, such as similar metrical patterns grouping together despite differences in the number of lines. Going forward, it is anticipated that this methodology will enable comparisons across multiple languages within the Indo-European language family.
|
[
"Tsukagoshi, Yuzuki",
"Ohmukai, Ikki"
] |
The Metronome Approach to {S}anskrit Meter: Analysis for the Rigveda
|
ml4al-1.22
|
Poster
|
2010.12937v1
|
https://aclanthology.org/2024.ml4al-1.23.bib
|
@inproceedings{mandikal-2024-ancient,
title = "Ancient Wisdom, Modern Tools: Exploring Retrieval-Augmented {LLM}s for {A}ncient {I}ndian Philosophy",
author = "Mandikal, Priyanka",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.23",
pages = "224--250",
abstract = "LLMs have revolutionized the landscape of information retrieval and knowledge dissemination. However, their application in specialized areas is often hindered by limitations such as factual inaccuracies and hallucinations, especially in long-tail knowledge distributions. In this work, we explore the potential of retrieval-augmented generation (RAG) models in performing long-form question answering (LFQA) on a specially curated niche and custom knowledge domain. We present VedantaNY-10M, a dataset curated from extensive public discourses on the ancient Indian philosophy of Advaita Vedanta. We develop and benchmark a RAG model against a standard, non-RAG LLM, focusing on transcription, retrieval, and generation performance. A human evaluation involving computational linguists and domain experts, shows that the RAG model significantly outperforms the standard model in producing factual, comprehensive responses having fewer hallucinations. In addition, we find that a keyword-based hybrid retriever that focuses on unique low-frequency words further improves results. Our study provides insights into the future development of real-world RAG models for custom and niche areas of knowledge.",
}
|
LLMs have revolutionized the landscape of information retrieval and knowledge dissemination. However, their application in specialized areas is often hindered by limitations such as factual inaccuracies and hallucinations, especially in long-tail knowledge distributions. In this work, we explore the potential of retrieval-augmented generation (RAG) models in performing long-form question answering (LFQA) on a specially curated niche and custom knowledge domain. We present VedantaNY-10M, a dataset curated from extensive public discourses on the ancient Indian philosophy of Advaita Vedanta. We develop and benchmark a RAG model against a standard, non-RAG LLM, focusing on transcription, retrieval, and generation performance. A human evaluation involving computational linguists and domain experts, shows that the RAG model significantly outperforms the standard model in producing factual, comprehensive responses having fewer hallucinations. In addition, we find that a keyword-based hybrid retriever that focuses on unique low-frequency words further improves results. Our study provides insights into the future development of real-world RAG models for custom and niche areas of knowledge.
|
[
"M",
"ikal, Priyanka"
] |
Ancient Wisdom, Modern Tools: Exploring Retrieval-Augmented {LLM}s for {A}ncient {I}ndian Philosophy
|
ml4al-1.23
|
Poster
|
1808.03738v2
|
https://aclanthology.org/2024.ml4al-1.24.bib
|
@inproceedings{chen-etal-2024-leveraging,
title = "Leveraging Part-of-Speech Tagging for Enhanced Stylometry of {L}atin Literature",
author = "Chen, Sarah and
Burns, Patrick and
Bolt, Thomas and
Chaudhuri, Pramit and
Dexter, Joseph",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.24",
pages = "251--259",
abstract = "In literary critical applications, stylometry can benefit from hand-curated feature sets capturing various syntactic and rhetorical functions. For premodern languages, calculation of such features is hampered by a lack of adequate computational resources for accurate part-of-speech tagging and semantic disambiguation. This paper reports an evaluation of POS-taggers for Latin and their use in augmenting a hand-curated stylometric feature set. Our experiments show that POS-augmented features not only provide more accurate counts than POS-blind features but also perform better on tasks such as genre classification. In the course of this work we introduce POS n-grams as a feature for Latin stylometry.",
}
|
In literary critical applications, stylometry can benefit from hand-curated feature sets capturing various syntactic and rhetorical functions. For premodern languages, calculation of such features is hampered by a lack of adequate computational resources for accurate part-of-speech tagging and semantic disambiguation. This paper reports an evaluation of POS-taggers for Latin and their use in augmenting a hand-curated stylometric feature set. Our experiments show that POS-augmented features not only provide more accurate counts than POS-blind features but also perform better on tasks such as genre classification. In the course of this work we introduce POS n-grams as a feature for Latin stylometry.
|
[
"Chen, Sarah",
"Burns, Patrick",
"Bolt, Thomas",
"Chaudhuri, Pramit",
"Dexter, Joseph"
] |
Leveraging Part-of-Speech Tagging for Enhanced Stylometry of {L}atin Literature
|
ml4al-1.24
|
Poster
|
2407.00418v1
|
https://aclanthology.org/2024.ml4al-1.25.bib
|
@inproceedings{konstantinidou-etal-2024-exploring,
title = "Exploring intertextuality across the {H}omeric poems through language models",
author = "Konstantinidou, Maria and
Pavlopoulos, John and
Barker, Elton",
editor = "Pavlopoulos, John and
Sommerschield, Thea and
Assael, Yannis and
Gordin, Shai and
Cho, Kyunghyun and
Passarotti, Marco and
Sprugnoli, Rachele and
Liu, Yudong and
Li, Bin and
Anderson, Adam",
booktitle = "Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)",
month = aug,
year = "2024",
address = "Hybrid in Bangkok, Thailand and online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.ml4al-1.25",
pages = "260--268",
abstract = "Past research has modelled statistically the language of the Homeric poems, assessing the degree of surprisal for each verse through diverse metrics and resulting to the HoLM resource. In this study we utilise the HoLM resource to explore cross poem affinity at the verse level, looking at Iliadic verses and passages that are less surprising to the Odyssean model than to the Iliadic one and vice-versa. Using the same tool, we investigate verses that evoke greater surprise when assessed by a local model trained solely on their source book, compared to a global model trained on the entire source poem. Investigating deeper on the distribution of such verses across the Homeric poems we employ machine learning text classification to further analyse quantitatively cross-poem affinity in selected books.",
}
|
Past research has statistically modelled the language of the Homeric poems, assessing the degree of surprisal for each verse through diverse metrics and resulting in the HoLM resource. In this study we utilise the HoLM resource to explore cross-poem affinity at the verse level, looking at Iliadic verses and passages that are less surprising to the Odyssean model than to the Iliadic one and vice-versa. Using the same tool, we investigate verses that evoke greater surprise when assessed by a local model trained solely on their source book, compared to a global model trained on the entire source poem. Investigating the distribution of such verses across the Homeric poems in more depth, we employ machine learning text classification to further analyse cross-poem affinity quantitatively in selected books.
|
[
"Konstantinidou, Maria",
"Pavlopoulos, John",
"Barker, Elton"
] |
Exploring intertextuality across the {H}omeric poems through language models
|
ml4al-1.25
|
Poster
|
2406.03843v2
|
https://aclanthology.org/2024.nlp4convai-1.1.bib
|
@inproceedings{mendonca-etal-2024-benchmarking,
title = "On the Benchmarking of {LLM}s for Open-Domain Dialogue Evaluation",
author = "Mendon{\c{c}}a, John and
Lavie, Alon and
Trancoso, Isabel",
editor = "Nouri, Elnaz and
Rastogi, Abhinav and
Spithourakis, Georgios and
Liu, Bing and
Chen, Yun-Nung and
Li, Yu and
Albalak, Alon and
Wakaki, Hiromi and
Papangelis, Alexandros",
booktitle = "Proceedings of the 6th Workshop on NLP for Conversational AI (NLP4ConvAI 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4convai-1.1",
pages = "1--12",
abstract = "Large Language Models (LLMs) have showcased remarkable capabilities in various Natural Language Processing tasks. For automatic open-domain dialogue evaluation in particular, LLMs have been seamlessly integrated into evaluation frameworks, and together with human evaluation, compose the backbone of most evaluations. However, existing evaluation benchmarks often rely on outdated datasets and evaluate aspects like Fluency and Relevance, which fail to adequately capture the capabilities and limitations of state-of-the-art chatbot models. This paper critically examines current evaluation benchmarks, highlighting that the use of older response generators and quality aspects fail to accurately reflect modern chatbot capabilities. A small annotation experiment on a recent LLM-generated dataset (SODA) reveals that LLM evaluators such as GPT-4 struggle to detect actual deficiencies in dialogues generated by current LLM chatbots.",
}
|
Large Language Models (LLMs) have showcased remarkable capabilities in various Natural Language Processing tasks. For automatic open-domain dialogue evaluation in particular, LLMs have been seamlessly integrated into evaluation frameworks, and together with human evaluation, compose the backbone of most evaluations. However, existing evaluation benchmarks often rely on outdated datasets and evaluate aspects like Fluency and Relevance, which fail to adequately capture the capabilities and limitations of state-of-the-art chatbot models. This paper critically examines current evaluation benchmarks, highlighting that the use of older response generators and quality aspects fail to accurately reflect modern chatbot capabilities. A small annotation experiment on a recent LLM-generated dataset (SODA) reveals that LLM evaluators such as GPT-4 struggle to detect actual deficiencies in dialogues generated by current LLM chatbots.
|
[
"Mendon{\\c{c}}a, John",
"Lavie, Alon",
"Trancoso, Isabel"
] |
On the Benchmarking of {LLM}s for Open-Domain Dialogue Evaluation
|
nlp4convai-1.1
|
Poster
|
2311.01677v2
|
https://aclanthology.org/2024.nlp4convai-1.2.bib
|
@inproceedings{hu-etal-2024-exploring,
title = "Exploring Description-Augmented Dataless Intent Classification",
author = "Hu, Ruoyu and
Khosmood, Foaad and
Edalat, Abbas",
editor = "Nouri, Elnaz and
Rastogi, Abhinav and
Spithourakis, Georgios and
Liu, Bing and
Chen, Yun-Nung and
Li, Yu and
Albalak, Alon and
Wakaki, Hiromi and
Papangelis, Alexandros",
booktitle = "Proceedings of the 6th Workshop on NLP for Conversational AI (NLP4ConvAI 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4convai-1.2",
pages = "13--36",
abstract = "In this work, we introduce several schemes to leverage description-augmented embedding similarity for dataless intent classification using current state-of-the-art (SOTA) text embedding models. We report results of our methods on four commonly used intent classification datasets and compare against previous works of a similar nature. Our work shows promising results for dataless classification scaling to a large number of unseen intents. We show competitive results and significant improvements (+6.12{\%} Avg.) over strong zero-shot baselines, all without training on labelled or task-specific data. Furthermore, we provide qualitative error analysis of the shortfalls of this methodology to help guide future research in this area.",
}
|
In this work, we introduce several schemes to leverage description-augmented embedding similarity for dataless intent classification using current state-of-the-art (SOTA) text embedding models. We report results of our methods on four commonly used intent classification datasets and compare against previous works of a similar nature. Our work shows promising results for dataless classification scaling to a large number of unseen intents. We show competitive results and significant improvements (+6.12{\%} Avg.) over strong zero-shot baselines, all without training on labelled or task-specific data. Furthermore, we provide qualitative error analysis of the shortfalls of this methodology to help guide future research in this area.
|
[
"Hu, Ruoyu",
"Khosmood, Foaad",
"Edalat, Abbas"
] |
Exploring Description-Augmented Dataless Intent Classification
|
nlp4convai-1.2
|
Poster
|
2407.17862v1
|
https://aclanthology.org/2024.nlp4convai-1.3.bib
|
@inproceedings{kim-etal-2024-revealing,
title = "Revealing User Familiarity Bias in Task-Oriented Dialogue via Interactive Evaluation",
author = "Kim, Takyoung and
Shin, Jamin and
Kim, Young-Ho and
Bae, Sanghwan and
Kim, Sungdong",
editor = "Nouri, Elnaz and
Rastogi, Abhinav and
Spithourakis, Georgios and
Liu, Bing and
Chen, Yun-Nung and
Li, Yu and
Albalak, Alon and
Wakaki, Hiromi and
Papangelis, Alexandros",
booktitle = "Proceedings of the 6th Workshop on NLP for Conversational AI (NLP4ConvAI 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4convai-1.3",
pages = "37--55",
abstract = "Most task-oriented dialogue (TOD) benchmarks assume users that know exactly how to use the system by constraining the user behaviors within the system{'}s capabilities via strict user goals, namely {``}user familiarity{''} bias. This data bias deepens when it combines with data-driven TOD systems, as it is impossible to fathom the effect of it with existing static evaluations. Hence, we conduct an interactive user study to unveil how vulnerable TOD systems are against realistic scenarios. In particular, we compare users with 1) detailed goal instructions that conform to the system boundaries (closed-goal) and 2) vague goal instructions that are often unsupported but realistic (open-goal). Our study reveals that conversations in open-goal settings lead to catastrophic failures of the system, in which 92{\%} of the dialogues had significant issues. Moreover, we conduct a thorough analysis to identify distinctive features between the two settings through error annotation. From this, we discover a novel {``}pretending{''} behavior, in which the system pretends to handle the user requests even though they are beyond the system{'}s capabilities. We discuss its characteristics and toxicity while showing recent large language models can also suffer from this behavior.",
}
|
Most task-oriented dialogue (TOD) benchmarks assume users that know exactly how to use the system by constraining the user behaviors within the system{'}s capabilities via strict user goals, namely {``}user familiarity{''} bias. This data bias deepens when it combines with data-driven TOD systems, as it is impossible to fathom the effect of it with existing static evaluations. Hence, we conduct an interactive user study to unveil how vulnerable TOD systems are against realistic scenarios. In particular, we compare users with 1) detailed goal instructions that conform to the system boundaries (closed-goal) and 2) vague goal instructions that are often unsupported but realistic (open-goal). Our study reveals that conversations in open-goal settings lead to catastrophic failures of the system, in which 92{\%} of the dialogues had significant issues. Moreover, we conduct a thorough analysis to identify distinctive features between the two settings through error annotation. From this, we discover a novel {``}pretending{''} behavior, in which the system pretends to handle the user requests even though they are beyond the system{'}s capabilities. We discuss its characteristics and toxicity while showing recent large language models can also suffer from this behavior.
|
[
"Kim, Takyoung",
"Shin, Jamin",
"Kim, Young-Ho",
"Bae, Sanghwan",
"Kim, Sungdong"
] |
Revealing User Familiarity Bias in Task-Oriented Dialogue via Interactive Evaluation
|
nlp4convai-1.3
|
Poster
|
2305.13857v2
|
https://aclanthology.org/2024.nlp4convai-1.4.bib
|
@inproceedings{gupta-etal-2024-evaluating,
title = "Evaluating Robustness of Open Dialogue Summarization Models in the Presence of Naturally Occurring Variations",
author = "Gupta, Ankita and
Gunasekara, Chulaka and
Wan, Hui and
Ganhotra, Jatin and
Joshi, Sachindra and
Danilevsky, Marina",
editor = "Nouri, Elnaz and
Rastogi, Abhinav and
Spithourakis, Georgios and
Liu, Bing and
Chen, Yun-Nung and
Li, Yu and
Albalak, Alon and
Wakaki, Hiromi and
Papangelis, Alexandros",
booktitle = "Proceedings of the 6th Workshop on NLP for Conversational AI (NLP4ConvAI 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4convai-1.4",
pages = "56--72",
abstract = "Dialogue summarization involves summarizing long conversations while preserving the most salient information. Real-life dialogues often involve naturally occurring variations (e.g., repetitions, hesitations). In this study, we systematically investigate the impact of such variations on state-of-the-art open dialogue summarization models whose details are publicly known (e.g., architectures, weights, and training corpora). To simulate real-life variations, we introduce two types of perturbations: utterance-level perturbations that modify individual utterances with errors and language variations, and dialogue-level perturbations that add non-informative exchanges (e.g., repetitions, greetings). We perform our analysis along three dimensions of robustness: consistency, saliency, and faithfulness, which aim to capture different aspects of performance of a summarization model. We find that both fine-tuned and instruction-tuned models are affected by input variations, with the latter being more susceptible, particularly to dialogue-level perturbations. We also validate our findings via human evaluation. Finally, we investigate whether the robustness of fine-tuned models can be improved by training them with a fraction of perturbed data. We find that this approach does not yield consistent performance gains, warranting further research. Overall, our work highlights robustness challenges in current open encoder-decoder summarization models and provides insights for future research.",
}
|
Dialogue summarization involves summarizing long conversations while preserving the most salient information. Real-life dialogues often involve naturally occurring variations (e.g., repetitions, hesitations). In this study, we systematically investigate the impact of such variations on state-of-the-art open dialogue summarization models whose details are publicly known (e.g., architectures, weights, and training corpora). To simulate real-life variations, we introduce two types of perturbations: utterance-level perturbations that modify individual utterances with errors and language variations, and dialogue-level perturbations that add non-informative exchanges (e.g., repetitions, greetings). We perform our analysis along three dimensions of robustness: consistency, saliency, and faithfulness, which aim to capture different aspects of performance of a summarization model. We find that both fine-tuned and instruction-tuned models are affected by input variations, with the latter being more susceptible, particularly to dialogue-level perturbations. We also validate our findings via human evaluation. Finally, we investigate whether the robustness of fine-tuned models can be improved by training them with a fraction of perturbed data. We find that this approach does not yield consistent performance gains, warranting further research. Overall, our work highlights robustness challenges in current open encoder-decoder summarization models and provides insights for future research.
|
[
"Gupta, Ankita",
"Gunasekara, Chulaka",
"Wan, Hui",
"Ganhotra, Jatin",
"Joshi, Sachindra",
"Danilevsky, Marina"
] |
Evaluating Robustness of Open Dialogue Summarization Models in the Presence of Naturally Occurring Variations
|
nlp4convai-1.4
|
Poster
|
2311.08705v1
|
https://aclanthology.org/2024.nlp4convai-1.5.bib
|
@inproceedings{schneider-etal-2024-engineering,
title = "Engineering Conversational Search Systems: A Review of Applications, Architectures, and Functional Components",
author = "Schneider, Phillip and
Poelman, Wessel and
Rovatsos, Michael and
Matthes, Florian",
editor = "Nouri, Elnaz and
Rastogi, Abhinav and
Spithourakis, Georgios and
Liu, Bing and
Chen, Yun-Nung and
Li, Yu and
Albalak, Alon and
Wakaki, Hiromi and
Papangelis, Alexandros",
booktitle = "Proceedings of the 6th Workshop on NLP for Conversational AI (NLP4ConvAI 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4convai-1.5",
pages = "73--88",
abstract = "Conversational search systems enable information retrieval via natural language interactions, with the goal of maximizing users{'} information gain over multiple dialogue turns. The increasing prevalence of conversational interfaces adopting this search paradigm challenges traditional information retrieval approaches, stressing the importance of better understanding the engineering process of developing these systems. We undertook a systematic literature review to investigate the links between theoretical studies and technical implementations of conversational search systems. Our review identifies real-world application scenarios, system architectures, and functional components. We consolidate our results by presenting a layered architecture framework and explaining the core functions of conversational search systems. Furthermore, we reflect on our findings in light of the rapid progress in large language models, discussing their capabilities, limitations, and directions for future research.",
}
|
Conversational search systems enable information retrieval via natural language interactions, with the goal of maximizing users{'} information gain over multiple dialogue turns. The increasing prevalence of conversational interfaces adopting this search paradigm challenges traditional information retrieval approaches, stressing the importance of better understanding the engineering process of developing these systems. We undertook a systematic literature review to investigate the links between theoretical studies and technical implementations of conversational search systems. Our review identifies real-world application scenarios, system architectures, and functional components. We consolidate our results by presenting a layered architecture framework and explaining the core functions of conversational search systems. Furthermore, we reflect on our findings in light of the rapid progress in large language models, discussing their capabilities, limitations, and directions for future research.
|
[
"Schneider, Phillip",
"Poelman, Wessel",
"Rovatsos, Michael",
"Matthes, Florian"
] |
Engineering Conversational Search Systems: A Review of Applications, Architectures, and Functional Components
|
nlp4convai-1.5
|
Poster
|
2407.00997v1
|
https://aclanthology.org/2024.nlp4convai-1.6.bib
|
@inproceedings{han-etal-2024-efficient,
title = "Efficient Dynamic Hard Negative Sampling for Dialogue Selection",
author = "Han, Janghoon and
Lee, Dongkyu and
Shin, Joongbo and
Bae, Hyunkyung and
Bang, Jeesoo and
Kim, Seonghwan and
Choi, Stanley Jungkyu and
Lee, Honglak",
editor = "Nouri, Elnaz and
Rastogi, Abhinav and
Spithourakis, Georgios and
Liu, Bing and
Chen, Yun-Nung and
Li, Yu and
Albalak, Alon and
Wakaki, Hiromi and
Papangelis, Alexandros",
booktitle = "Proceedings of the 6th Workshop on NLP for Conversational AI (NLP4ConvAI 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4convai-1.6",
pages = "89--100",
abstract = "Recent studies have demonstrated significant improvements in selection tasks, and a considerable portion of this success is attributed to incorporating informative negative samples during training. While traditional methods for constructing hard negatives provide meaningful supervision, they depend on static samples that do not evolve during training, leading to sub-optimal performance. Dynamic hard negative sampling addresses this limitation by continuously adapting to the model{'}s changing state throughout training. However, the high computational demands of this method restrict its applicability to certain model architectures. To overcome these challenges, we introduce an efficient dynamic hard negative sampling (EDHNS). EDHNS enhances efficiency by pre-filtering easily discriminable negatives, thereby reducing the number of candidates the model needs to compute during training. Additionally, it excludes question-candidate pairs where the model already exhibits high confidence from loss computations, further reducing training time. These approaches maintain learning quality while minimizing computation and streamlining the training process. Extensive experiments on DSTC9, DSTC10, Ubuntu, and E-commerce benchmarks demonstrate that EDHNS significantly outperforms baseline models, proving its effectiveness in dialogue selection tasks.",
}
|
Recent studies have demonstrated significant improvements in selection tasks, and a considerable portion of this success is attributed to incorporating informative negative samples during training. While traditional methods for constructing hard negatives provide meaningful supervision, they depend on static samples that do not evolve during training, leading to sub-optimal performance. Dynamic hard negative sampling addresses this limitation by continuously adapting to the model{'}s changing state throughout training. However, the high computational demands of this method restrict its applicability to certain model architectures. To overcome these challenges, we introduce an efficient dynamic hard negative sampling (EDHNS). EDHNS enhances efficiency by pre-filtering easily discriminable negatives, thereby reducing the number of candidates the model needs to compute during training. Additionally, it excludes question-candidate pairs where the model already exhibits high confidence from loss computations, further reducing training time. These approaches maintain learning quality while minimizing computation and streamlining the training process. Extensive experiments on DSTC9, DSTC10, Ubuntu, and E-commerce benchmarks demonstrate that EDHNS significantly outperforms baseline models, proving its effectiveness in dialogue selection tasks.
|
[
"Han, Janghoon",
"Lee, Dongkyu",
"Shin, Joongbo",
"Bae, Hyunkyung",
"Bang, Jeesoo",
"Kim, Seonghwan",
"Choi, Stanley Jungkyu",
"Lee, Honglak"
] |
Efficient Dynamic Hard Negative Sampling for Dialogue Selection
|
nlp4convai-1.6
|
Poster
|
2403.19276v1
|
https://aclanthology.org/2024.nlp4convai-1.7.bib
|
@inproceedings{yang-etal-2024-chamain,
title = "Chamain: Harmonizing Character Persona Integrity with Domain-Adaptive Knowledge in Dialogue Generation",
author = "Yang, Seung-Moo and
Lee, Jeehyun and
Cho, Won Ik",
editor = "Nouri, Elnaz and
Rastogi, Abhinav and
Spithourakis, Georgios and
Liu, Bing and
Chen, Yun-Nung and
Li, Yu and
Albalak, Alon and
Wakaki, Hiromi and
Papangelis, Alexandros",
booktitle = "Proceedings of the 6th Workshop on NLP for Conversational AI (NLP4ConvAI 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4convai-1.7",
pages = "101--113",
abstract = "Recent advances in large language models (LLMs) have shown their capacity for generating natural dialogues, leveraging extensive pre-trained knowledge. However, the seamless integration of domain-specific knowledge into dialogue agents, without undermining their personas or unique textual style, remains a challenging task. Traditional approaches, such as constructing knowledge-aware character dialogue datasets or training LLMs from the ground up, require considerable resources. Sequentially fine-tuning character chatbots across multiple datasets or applying existing merging techniques often leads to catastrophic forgetting, resulting in the loss of both knowledge and the character{'}s distinct persona. This compromises the model{'}s ability to consistently generate character-driven dialogues within a user-centric framework. In this context, we introduce a novel model merging method, Chamain, which effortlessly enhances the performance of character models, much like finding a {``}free lunch{''}. Chamain merges domain-specific knowledge into a character model by parameter-wise weight combination of instruction-tuned models and learns to reflect persona{'}s unique characteristics and style through Layer-wise merging. Our experiments demonstrate that Chamain effectively maintains style while also solving domain-specific problems to a certain extent compared to the baselines, even showing a higher style probability compared to the character model in legal QA.",
}
|
Recent advances in large language models (LLMs) have shown their capacity for generating natural dialogues, leveraging extensive pre-trained knowledge. However, the seamless integration of domain-specific knowledge into dialogue agents, without undermining their personas or unique textual style, remains a challenging task. Traditional approaches, such as constructing knowledge-aware character dialogue datasets or training LLMs from the ground up, require considerable resources. Sequentially fine-tuning character chatbots across multiple datasets or applying existing merging techniques often leads to catastrophic forgetting, resulting in the loss of both knowledge and the character{'}s distinct persona. This compromises the model{'}s ability to consistently generate character-driven dialogues within a user-centric framework. In this context, we introduce a novel model merging method, Chamain, which effortlessly enhances the performance of character models, much like finding a {``}free lunch{''}. Chamain merges domain-specific knowledge into a character model by parameter-wise weight combination of instruction-tuned models and learns to reflect persona{'}s unique characteristics and style through Layer-wise merging. Our experiments demonstrate that Chamain effectively maintains style while also solving domain-specific problems to a certain extent compared to the baselines, even showing a higher style probability compared to the character model in legal QA.
|
[
"Yang, Seung-Moo",
"Lee, Jeehyun",
"Cho, Won Ik"
] |
Chamain: Harmonizing Character Persona Integrity with Domain-Adaptive Knowledge in Dialogue Generation
|
nlp4convai-1.7
|
Poster
|
2207.13919v1
|
https://aclanthology.org/2024.nlp4convai-1.8.bib
|
@inproceedings{jandaghi-etal-2024-faithful,
title = "Faithful Persona-based Conversational Dataset Generation with Large Language Models",
author = "Jandaghi, Pegah and
Sheng, Xianghai and
Bai, Xinyi and
Pujara, Jay and
Sidahmed, Hakim",
editor = "Nouri, Elnaz and
Rastogi, Abhinav and
Spithourakis, Georgios and
Liu, Bing and
Chen, Yun-Nung and
Li, Yu and
Albalak, Alon and
Wakaki, Hiromi and
Papangelis, Alexandros",
booktitle = "Proceedings of the 6th Workshop on NLP for Conversational AI (NLP4ConvAI 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4convai-1.8",
pages = "114--139",
abstract = "High-quality conversational datasets are essential for developing AI models that can communicate with users. One way to foster deeper interactions between a chatbot and its user is through personas, aspects of the user{'}s character that provide insights into their personality, motivations, and behaviors. Training Natural Language Processing (NLP) models on a diverse and comprehensive persona-based dataset can lead to conversational models that create a deeper connection with the user, and maintain their engagement. In this paper, we leverage the power of Large Language Models (LLMs) to create a large, high-quality conversational dataset from a seed dataset. We propose a Generator-Critic architecture framework to expand the initial dataset, while improving the quality of its conversations. The Generator is an LLM prompted to output conversations. The Critic consists of a mixture of expert LLMs that control the quality of the generated conversations. These experts select the best generated conversations, which we then use to improve the Generator. We release Synthetic-Persona-Chat, consisting of 20k conversations seeded from Persona-Chat. We evaluate the quality of Synthetic-Persona-Chat and our generation framework on different dimensions through extensive experiments, and observe that the losing rate of Synthetic-Persona-Chat against Persona-Chat during an AI detection test decreases from 17.2{\%} to 8.8{\%} over three iterations.",
}
|
High-quality conversational datasets are essential for developing AI models that can communicate with users. One way to foster deeper interactions between a chatbot and its user is through personas, aspects of the user{'}s character that provide insights into their personality, motivations, and behaviors. Training Natural Language Processing (NLP) models on a diverse and comprehensive persona-based dataset can lead to conversational models that create a deeper connection with the user, and maintain their engagement. In this paper, we leverage the power of Large Language Models (LLMs) to create a large, high-quality conversational dataset from a seed dataset. We propose a Generator-Critic architecture framework to expand the initial dataset, while improving the quality of its conversations. The Generator is an LLM prompted to output conversations. The Critic consists of a mixture of expert LLMs that control the quality of the generated conversations. These experts select the best generated conversations, which we then use to improve the Generator. We release Synthetic-Persona-Chat, consisting of 20k conversations seeded from Persona-Chat. We evaluate the quality of Synthetic-Persona-Chat and our generation framework on different dimensions through extensive experiments, and observe that the losing rate of Synthetic-Persona-Chat against Persona-Chat during an AI detection test decreases from 17.2{\%} to 8.8{\%} over three iterations.
|
[
"J",
"aghi, Pegah",
"Sheng, Xianghai",
"Bai, Xinyi",
"Pujara, Jay",
"Sidahmed, Hakim"
] |
Faithful Persona-based Conversational Dataset Generation with Large Language Models
|
nlp4convai-1.8
|
Poster
|
2303.03278v1
|
https://aclanthology.org/2024.nlrse-1.1.bib
|
@inproceedings{cao-2024-graphreason,
title = "{G}raph{R}eason: Enhancing Reasoning Capabilities of Large Language Models through A Graph-Based Verification Approach",
author = "Cao, Lang",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Lipkin, Ben and
Neves Ribeiro, Danilo and
Wong, Lionel and
Ye, Xi and
Zhao, Wenting",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlrse-1.1",
pages = "1--12",
abstract = "Large Language Models (LLMs) have showcased impressive reasoning capabilities, particularly when guided by specifically designed prompts in complex reasoning tasks such as math word problems. These models typically solve tasks using a chain-of-thought approach, which not only bolsters their reasoning abilities but also provides valuable insights into their problem-solving process. However, there is still significant room for enhancing the reasoning abilities of LLMs. Some studies suggest that the integration of an LLM output verifier can boost reasoning accuracy without necessitating additional model training. In this paper, we follow these studies and introduce a novel graph-based method to further augment the reasoning capabilities of LLMs. We posit that multiple solutions to a reasoning task, generated by an LLM, can be represented as a reasoning graph due to the logical connections between intermediate steps from different reasoning paths. Therefore, we propose the Reasoning Graph Verifier (GraphReason) to analyze and verify the solutions generated by LLMs. By evaluating these graphs, models can yield more accurate and reliable results.Our experimental results show that our graph-based verification method not only significantly enhances the reasoning abilities of LLMs but also outperforms existing verifier methods in terms of improving these models{'} reasoning performance.",
}
|
Large Language Models (LLMs) have showcased impressive reasoning capabilities, particularly when guided by specifically designed prompts in complex reasoning tasks such as math word problems. These models typically solve tasks using a chain-of-thought approach, which not only bolsters their reasoning abilities but also provides valuable insights into their problem-solving process. However, there is still significant room for enhancing the reasoning abilities of LLMs. Some studies suggest that the integration of an LLM output verifier can boost reasoning accuracy without necessitating additional model training. In this paper, we follow these studies and introduce a novel graph-based method to further augment the reasoning capabilities of LLMs. We posit that multiple solutions to a reasoning task, generated by an LLM, can be represented as a reasoning graph due to the logical connections between intermediate steps from different reasoning paths. Therefore, we propose the Reasoning Graph Verifier (GraphReason) to analyze and verify the solutions generated by LLMs. By evaluating these graphs, models can yield more accurate and reliable results. Our experimental results show that our graph-based verification method not only significantly enhances the reasoning abilities of LLMs but also outperforms existing verifier methods in terms of improving these models{'} reasoning performance.
|
[
"Cao, Lang"
] |
{G}raph{R}eason: Enhancing Reasoning Capabilities of Large Language Models through A Graph-Based Verification Approach
|
nlrse-1.1
|
Poster
|
2407.18367v1
|
https://aclanthology.org/2024.nlrse-1.2.bib
|
@inproceedings{zhang-etal-2024-proc2pddl,
title = "{PROC}2{PDDL}: Open-Domain Planning Representations from Texts",
author = "Zhang, Tianyi and
Zhang, Li and
Hou, Zhaoyi and
Wang, Ziyu and
Gu, Yuling and
Clark, Peter and
Callison-Burch, Chris and
Tandon, Niket",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Lipkin, Ben and
Neves Ribeiro, Danilo and
Wong, Lionel and
Ye, Xi and
Zhao, Wenting",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlrse-1.2",
pages = "13--24",
abstract = "Planning in a text-based environment continues to be a significant challenge for AI systems. Recent approaches have utilized language models to predict planning domain definitions (e.g., PDDL) but have only been evaluated in closed-domain simulated environments. To address this, we present Proc2PDDL, the first dataset containing open-domain procedural texts paired with expert-annotated PDDL representations. Using this dataset, we evaluate the task of predicting domain actions (parameters, preconditions, and effects). We experiment with various large language models (LLMs) and prompting mechanisms, including a novel instruction inspired by the zone of proximal development (ZPD), which reconstructs the task as incremental basic skills. Our results demonstrate that Proc2PDDL is highly challenging for end-to-end LLMs, with GPT-3.5{'}s success rate close to 0{\%} and GPT-4o{'}s 38{\%}. With ZPD instructions, GPT-4o{'}s success rate increases to 45{\%}, outperforming regular chain-of-thought prompting{'}s 34{\%}. Our analysis systematically examines both syntactic and semantic errors, providing insights into the strengths and weaknesses of language models in generating domain-specific programs.",
}
|
Planning in a text-based environment continues to be a significant challenge for AI systems. Recent approaches have utilized language models to predict planning domain definitions (e.g., PDDL) but have only been evaluated in closed-domain simulated environments. To address this, we present Proc2PDDL, the first dataset containing open-domain procedural texts paired with expert-annotated PDDL representations. Using this dataset, we evaluate the task of predicting domain actions (parameters, preconditions, and effects). We experiment with various large language models (LLMs) and prompting mechanisms, including a novel instruction inspired by the zone of proximal development (ZPD), which reconstructs the task as incremental basic skills. Our results demonstrate that Proc2PDDL is highly challenging for end-to-end LLMs, with GPT-3.5{'}s success rate close to 0{\%} and GPT-4o{'}s 38{\%}. With ZPD instructions, GPT-4o{'}s success rate increases to 45{\%}, outperforming regular chain-of-thought prompting{'}s 34{\%}. Our analysis systematically examines both syntactic and semantic errors, providing insights into the strengths and weaknesses of language models in generating domain-specific programs.
|
[
"Zhang, Tianyi",
"Zhang, Li",
"Hou, Zhaoyi",
"Wang, Ziyu",
"Gu, Yuling",
"Clark, Peter",
"Callison-Burch, Chris",
"T",
"on, Niket"
] |
{PROC}2{PDDL}: Open-Domain Planning Representations from Texts
|
nlrse-1.2
|
Poster
|
1803.02208v1
|
https://aclanthology.org/2024.nlrse-1.3.bib
|
@inproceedings{deng-etal-2024-towards,
title = "Towards A Unified View of Answer Calibration for Multi-Step Reasoning",
author = "Deng, Shumin and
Zhang, Ningyu and
Oo, Nay and
Hooi, Bryan",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Lipkin, Ben and
Neves Ribeiro, Danilo and
Wong, Lionel and
Ye, Xi and
Zhao, Wenting",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlrse-1.3",
pages = "25--38",
abstract = "Large Language Models (LLMs) employing Chain-of-Thought (CoT) prompting have broadened the scope for improving multi-step reasoning capabilities. We generally divide multi-step reasoning into two phases: *path generation* to generate the reasoning path(s); and *answer calibration* post-processing the reasoning path(s) to obtain a final answer. However, the existing literature lacks systematic analysis on different answer calibration approaches. In this paper, we summarize the taxonomy of recent answer calibration techniques and break them down into step-level and path-level strategies. We then conduct a thorough evaluation on these strategies from a unified view, systematically scrutinizing step-level and path-level answer calibration across multiple paths. Experimental results reveal that integrating the dominance of both strategies tends to derive optimal outcomes. Our study holds the potential to illuminate key insights for optimizing multi-step reasoning with answer calibration.",
}
|
Large Language Models (LLMs) employing Chain-of-Thought (CoT) prompting have broadened the scope for improving multi-step reasoning capabilities. We generally divide multi-step reasoning into two phases: *path generation* to generate the reasoning path(s); and *answer calibration* post-processing the reasoning path(s) to obtain a final answer. However, the existing literature lacks systematic analysis on different answer calibration approaches. In this paper, we summarize the taxonomy of recent answer calibration techniques and break them down into step-level and path-level strategies. We then conduct a thorough evaluation on these strategies from a unified view, systematically scrutinizing step-level and path-level answer calibration across multiple paths. Experimental results reveal that integrating the dominance of both strategies tends to derive optimal outcomes. Our study holds the potential to illuminate key insights for optimizing multi-step reasoning with answer calibration.
|
[
"Deng, Shumin",
"Zhang, Ningyu",
"Oo, Nay",
"Hooi, Bryan"
] |
Towards A Unified View of Answer Calibration for Multi-Step Reasoning
|
nlrse-1.3
|
Poster
|
2311.09101v2
|
https://aclanthology.org/2024.nlrse-1.4.bib
|
@inproceedings{dutta-etal-2024-applying,
title = "Applying {RLAIF} for Code Generation with {API}-usage in Lightweight {LLM}s",
author = "Dutta, Sujan and
Mahinder, Sayantan and
Anantha, Raviteja and
Bandyopadhyay, Bortik",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Lipkin, Ben and
Neves Ribeiro, Danilo and
Wong, Lionel and
Ye, Xi and
Zhao, Wenting",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlrse-1.4",
pages = "39--45",
abstract = "Reinforcement Learning from AI Feedback (RLAIF) has demonstrated significant potential across various domains, including mitigating harm in LLM outputs, enhancing text summarization, and mathematical reasoning. This paper introduces an RLAIF framework for improving the code generation abilities of lightweight ({\textless}1B parameters) LLMs. We specifically focus on code generation tasks that require writing appropriate API calls, which is challenging due to the well-known issue of hallucination in LLMs. Our framework extracts AI feedback from a larger LLM (e.g., GPT-3.5) through a specialized prompting strategy and uses this data to train a reward model towards better alignment from smaller LLMs. We run our experiments on the Gorilla dataset and meticulously assess the quality of the model-generated code across various metrics, including AST, ROUGE, and Code-BLEU, and develop a pipeline to compute its executability rate accurately. Our approach significantly enhances the fine-tuned LLM baseline{'}s performance, achieving a 4.5{\%} improvement in executability rate. Notably, a smaller LLM model (780M parameters) trained with RLAIF surpasses a much larger fine-tuned baseline with 7B parameters, achieving a 1.0{\%} higher code executability rate.",
}
|
Reinforcement Learning from AI Feedback (RLAIF) has demonstrated significant potential across various domains, including mitigating harm in LLM outputs, enhancing text summarization, and mathematical reasoning. This paper introduces an RLAIF framework for improving the code generation abilities of lightweight ({\textless}1B parameters) LLMs. We specifically focus on code generation tasks that require writing appropriate API calls, which is challenging due to the well-known issue of hallucination in LLMs. Our framework extracts AI feedback from a larger LLM (e.g., GPT-3.5) through a specialized prompting strategy and uses this data to train a reward model towards better alignment from smaller LLMs. We run our experiments on the Gorilla dataset and meticulously assess the quality of the model-generated code across various metrics, including AST, ROUGE, and Code-BLEU, and develop a pipeline to compute its executability rate accurately. Our approach significantly enhances the fine-tuned LLM baseline{'}s performance, achieving a 4.5{\%} improvement in executability rate. Notably, a smaller LLM model (780M parameters) trained with RLAIF surpasses a much larger fine-tuned baseline with 7B parameters, achieving a 1.0{\%} higher code executability rate.
|
[
"Dutta, Sujan",
"Mahinder, Sayantan",
"Anantha, Raviteja",
"B",
"yopadhyay, Bortik"
] |
Applying {RLAIF} for Code Generation with {API}-usage in Lightweight {LLM}s
|
nlrse-1.4
|
Poster
|
2406.20060v1
|
https://aclanthology.org/2024.nlrse-1.5.bib
|
@inproceedings{liu-etal-2024-summequal,
title = "{S}umm{EQ}u{AL}: Summarization Evaluation via Question Answering using Large Language Models",
author = "Liu, Junyuan and
Shi, Zhengyan and
Lipani, Aldo",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Lipkin, Ben and
Neves Ribeiro, Danilo and
Wong, Lionel and
Ye, Xi and
Zhao, Wenting",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlrse-1.5",
pages = "46--55",
abstract = "Summarization is hard to evaluate due to its diverse and abstract nature. Although N-gram-based metrics like BLEU and ROUGE are prevalent, they often do not align well with human evaluations. While model-based alternatives such as BERTScore improve, they typically require extensive labelled data. The advent of Large Language Models (LLMs) presents a promising avenue for evaluation. To this end, we introduce SummEQuAL, a novel content-based framework using LLMs for unified, reproducible summarization evaluation. SummEQuAL evaluates summaries by comparing their content with the source document, employing a question-answering approach to gauge both recall and precision. To validate SummEQuAL{'}s effectiveness, we develop a dataset based on MultiWOZ. We conduct experiments on SummEval and our MultiWOZ-based dataset, showing that SummEQuAL largely improves the quality of summarization evaluation. Notably, SummEQuAL demonstrates a 19.7{\%} improvement over QuestEval in terms of sample-level Pearson correlation with human assessments of consistency on the SummEval dataset. Furthermore, it exceeds the performance of the BERTScore baseline by achieving a 17.3{\%} increase in Spearman correlation on our MultiWOZ-based dataset. Our study illuminates the potential of LLMs for a unified evaluation framework, setting a new paradigm for future summarization evaluation.",
}
|
Summarization is hard to evaluate due to its diverse and abstract nature. Although N-gram-based metrics like BLEU and ROUGE are prevalent, they often do not align well with human evaluations. While model-based alternatives such as BERTScore improve, they typically require extensive labelled data. The advent of Large Language Models (LLMs) presents a promising avenue for evaluation. To this end, we introduce SummEQuAL, a novel content-based framework using LLMs for unified, reproducible summarization evaluation. SummEQuAL evaluates summaries by comparing their content with the source document, employing a question-answering approach to gauge both recall and precision. To validate SummEQuAL{'}s effectiveness, we develop a dataset based on MultiWOZ. We conduct experiments on SummEval and our MultiWOZ-based dataset, showing that SummEQuAL largely improves the quality of summarization evaluation. Notably, SummEQuAL demonstrates a 19.7{\%} improvement over QuestEval in terms of sample-level Pearson correlation with human assessments of consistency on the SummEval dataset. Furthermore, it exceeds the performance of the BERTScore baseline by achieving a 17.3{\%} increase in Spearman correlation on our MultiWOZ-based dataset. Our study illuminates the potential of LLMs for a unified evaluation framework, setting a new paradigm for future summarization evaluation.
|
[
"Liu, Junyuan",
"Shi, Zhengyan",
"Lipani, Aldo"
] |
{S}umm{EQ}u{AL}: Summarization Evaluation via Question Answering using Large Language Models
|
nlrse-1.5
|
Poster
|
2407.13998v1
|
https://aclanthology.org/2024.nlrse-1.6.bib
|
@inproceedings{kirtania-etal-2024-logic,
title = "{LOGIC}-{LM}++: Multi-Step Refinement for Symbolic Formulations",
author = "Kirtania, Shashank and
Gupta, Priyanshu and
Radhakrishna, Arjun",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Lipkin, Ben and
Neves Ribeiro, Danilo and
Wong, Lionel and
Ye, Xi and
Zhao, Wenting",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlrse-1.6",
pages = "56--63",
abstract = "In this paper we examine the limitations of Large Language Models (LLMs) for complex reasoning tasks. While current approaches leverage formal languages as intermediate representation for these reasoning problems, they still struggle with generating intermediate for-mal specifications with great correctness and in refining these representations. To address these issues, this paper proposes Logic-LM++, an improvement on Logic-LM (Pan et al., 2023). It uses the ability of LLMs to do pairwise comparisons, allowing the evaluation of the refinements suggested by the LLM. The paper demonstrates that Logic-LM++ outperforms Logic-LM and LLM based techniques on natural language reasoning tasks on two datasets, FOLIO, ProofWriter and AR-LSAT. Logic-LM++ show an average improvement of 18.5{\%} on standard prompting, 12.3{\%} on chain of thought prompting and 5{\%} on Logic-LM.",
}
|
In this paper we examine the limitations of Large Language Models (LLMs) for complex reasoning tasks. While current approaches leverage formal languages as intermediate representation for these reasoning problems, they still struggle with generating intermediate formal specifications with great correctness and in refining these representations. To address these issues, this paper proposes Logic-LM++, an improvement on Logic-LM (Pan et al., 2023). It uses the ability of LLMs to do pairwise comparisons, allowing the evaluation of the refinements suggested by the LLM. The paper demonstrates that Logic-LM++ outperforms Logic-LM and LLM based techniques on natural language reasoning tasks on two datasets, FOLIO, ProofWriter and AR-LSAT. Logic-LM++ shows an average improvement of 18.5{\%} on standard prompting, 12.3{\%} on chain of thought prompting and 5{\%} on Logic-LM.
|
[
"Kirtania, Shashank",
"Gupta, Priyanshu",
"Radhakrishna, Arjun"
] |
{LOGIC}-{LM}++: Multi-Step Refinement for Symbolic Formulations
|
nlrse-1.6
|
Poster
|
1002.2698v2
|
https://aclanthology.org/2024.nlrse-1.7.bib
|
@inproceedings{chen-etal-2024-good,
title = "From Good to Great: Improving Math Reasoning with Tool-Augmented Interleaf Prompting",
author = "Chen, Nuo and
Li, Hongguang and
Wang, Baoyuan and
Li, Jia",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Lipkin, Ben and
Neves Ribeiro, Danilo and
Wong, Lionel and
Ye, Xi and
Zhao, Wenting",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlrse-1.7",
pages = "64--79",
abstract = "This paper investigates the performance of Large Language Models (LLMs) and Tool-augmented LLMs in tackling complex mathematical reasoning tasks. We introduce IMR-TIP: Improving Math Reasoning with Tool-augmented Interleaf Prompting, a framework that combines the strengths of both LLMs and Tool-augmented LLMs. IMR-TIP follows the {``}From Good to Great{''} concept, collecting multiple potential solutions from both LLMs and their Tool-Augmented counterparts for the same math problem, and then selecting or re-generating the most accurate answer after cross-checking these solutions via tool-augmented interleaf prompting. The framework incorporates two key aspects: self-prompt and tool-augmented interleaf prompting (TIP). The former allows LLMs to autonomously refine and improve an initial prompt related to tool usage, while the latter enables LLMs to derive the final answer by dynamically analyzing the problem, cross-checking potential solutions, and revising previous reasoning hints in an interleaved manner. Experimental analysis shows that IMR-TIP achieves enhanced mathematical capabilities and outperforms traditional LLMs and tool-augmented LLMs in accuracy and reasoning diversity on math reasoning tasks. For instance, IMR-TIP can improve Tool-augmented ChatGPT on GSM8K-Hard from 56.0{\%} to 65.2 {\%}.",
}
|
This paper investigates the performance of Large Language Models (LLMs) and Tool-augmented LLMs in tackling complex mathematical reasoning tasks. We introduce IMR-TIP: Improving Math Reasoning with Tool-augmented Interleaf Prompting, a framework that combines the strengths of both LLMs and Tool-augmented LLMs. IMR-TIP follows the {``}From Good to Great{''} concept, collecting multiple potential solutions from both LLMs and their Tool-Augmented counterparts for the same math problem, and then selecting or re-generating the most accurate answer after cross-checking these solutions via tool-augmented interleaf prompting. The framework incorporates two key aspects: self-prompt and tool-augmented interleaf prompting (TIP). The former allows LLMs to autonomously refine and improve an initial prompt related to tool usage, while the latter enables LLMs to derive the final answer by dynamically analyzing the problem, cross-checking potential solutions, and revising previous reasoning hints in an interleaved manner. Experimental analysis shows that IMR-TIP achieves enhanced mathematical capabilities and outperforms traditional LLMs and tool-augmented LLMs in accuracy and reasoning diversity on math reasoning tasks. For instance, IMR-TIP can improve Tool-augmented ChatGPT on GSM8K-Hard from 56.0{\%} to 65.2 {\%}.
|
[
"Chen, Nuo",
"Li, Hongguang",
"Wang, Baoyuan",
"Li, Jia"
] |
From Good to Great: Improving Math Reasoning with Tool-Augmented Interleaf Prompting
|
nlrse-1.7
|
Poster
|
2401.05384v1
|
https://aclanthology.org/2024.privatenlp-1.1.bib
|
@inproceedings{galli-etal-2024-noisy,
title = "Noisy Neighbors: Efficient membership inference attacks against {LLM}s",
author = "Galli, Filippo and
Melis, Luca and
Cucinotta, Tommaso",
editor = "Habernal, Ivan and
Ghanavati, Sepideh and
Ravichander, Abhilasha and
Jain, Vijayanta and
Thaine, Patricia and
Igamberdiev, Timour and
Mireshghallah, Niloofar and
Feyisetan, Oluwaseyi",
booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.privatenlp-1.1",
pages = "1--6",
abstract = "The potential of transformer-based LLMs risks being hindered by privacy concerns due to their reliance on extensive datasets, possibly including sensitive information. Regulatory measures like GDPR and CCPA call for using robust auditing tools to address potential privacy issues, with Membership Inference Attacks (MIA) being the primary method for assessing LLMs{'} privacy risks. Differently from traditional MIA approaches, often requiring computationally intensive training of additional models, this paper introduces an efficient methodology that generates noisy neighbors for a target sample by adding stochastic noise in the embedding space, requiring operating the target model in inference mode only. Our findings demonstrate that this approach closely matches the effectiveness of employing shadow models, showing its usability in practical privacy auditing scenarios.",
}
|
The potential of transformer-based LLMs risks being hindered by privacy concerns due to their reliance on extensive datasets, possibly including sensitive information. Regulatory measures like GDPR and CCPA call for using robust auditing tools to address potential privacy issues, with Membership Inference Attacks (MIA) being the primary method for assessing LLMs{'} privacy risks. Differently from traditional MIA approaches, often requiring computationally intensive training of additional models, this paper introduces an efficient methodology that generates noisy neighbors for a target sample by adding stochastic noise in the embedding space, requiring operating the target model in inference mode only. Our findings demonstrate that this approach closely matches the effectiveness of employing shadow models, showing its usability in practical privacy auditing scenarios.
|
[
"Galli, Filippo",
"Melis, Luca",
"Cucinotta, Tommaso"
] |
Noisy Neighbors: Efficient membership inference attacks against {LLM}s
|
privatenlp-1.1
|
Poster
|
2406.16565v1
|
https://aclanthology.org/2024.privatenlp-1.2.bib
|
@inproceedings{zyskind-etal-2024-dont,
title = "Don{'}t forget private retrieval: distributed private similarity search for large language models",
author = "Zyskind, Guy and
South, Tobin and
Pentland, Alex",
editor = "Habernal, Ivan and
Ghanavati, Sepideh and
Ravichander, Abhilasha and
Jain, Vijayanta and
Thaine, Patricia and
Igamberdiev, Timour and
Mireshghallah, Niloofar and
Feyisetan, Oluwaseyi",
booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.privatenlp-1.2",
pages = "7--19",
abstract = "While the flexible capabilities of large language models (LLMs) allow them to answer a range of queries based on existing learned knowledge, information retrieval to augment generation is an important tool to allow LLMs to answer questions on information not included in pre-training data. Such private information is increasingly being generated in a wide array of distributed contexts by organizations and individuals. Performing such information retrieval using neural embeddings of queries and documents always leaked information about queries and database content unless both were stored locally. We present Private Retrieval Augmented Generation (PRAG), an approach that uses multi-party computation (MPC) to securely transmit queries to a distributed set of servers containing a privately constructed database to return top-k and approximate top-k documents. This is a first-of-its-kind approach to dense information retrieval that ensures no server observes a client{'}s query or can see the database content. The approach introduces a novel MPC friendly protocol for inverted file approximate search (IVF) that allows for fast document search over distributed and private data in sublinear communication complexity. This work presents new avenues through which data for use in LLMs can be accessed and used without needing to centralize or forgo privacy.",
}
|
While the flexible capabilities of large language models (LLMs) allow them to answer a range of queries based on existing learned knowledge, information retrieval to augment generation is an important tool to allow LLMs to answer questions on information not included in pre-training data. Such private information is increasingly being generated in a wide array of distributed contexts by organizations and individuals. Performing such information retrieval using neural embeddings of queries and documents always leaked information about queries and database content unless both were stored locally. We present Private Retrieval Augmented Generation (PRAG), an approach that uses multi-party computation (MPC) to securely transmit queries to a distributed set of servers containing a privately constructed database to return top-k and approximate top-k documents. This is a first-of-its-kind approach to dense information retrieval that ensures no server observes a client{'}s query or can see the database content. The approach introduces a novel MPC friendly protocol for inverted file approximate search (IVF) that allows for fast document search over distributed and private data in sublinear communication complexity. This work presents new avenues through which data for use in LLMs can be accessed and used without needing to centralize or forgo privacy.
|
[
"Zyskind, Guy",
"South, Tobin",
"Pentl",
", Alex"
] |
Don{'}t forget private retrieval: distributed private similarity search for large language models
|
privatenlp-1.2
|
Poster
|
2311.12955v1
|
https://aclanthology.org/2024.privatenlp-1.3.bib
|
@inproceedings{arnold-etal-2024-characterizing,
title = "Characterizing Stereotypical Bias from Privacy-preserving Pre-Training",
author = {Arnold, Stefan and
Gr{\"o}bner, Rene and
Schreiner, Annika},
editor = "Habernal, Ivan and
Ghanavati, Sepideh and
Ravichander, Abhilasha and
Jain, Vijayanta and
Thaine, Patricia and
Igamberdiev, Timour and
Mireshghallah, Niloofar and
Feyisetan, Oluwaseyi",
booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.privatenlp-1.3",
pages = "20--28",
abstract = "Differential Privacy (DP) can be applied to raw text by exploiting the spatial arrangement of words in an embedding space. We investigate the implications of such text privatization on Language Models (LMs) and their tendency towards stereotypical associations. Since previous studies documented that linguistic proficiency correlates with stereotypical bias, one could assume that techniques for text privatization, which are known to degrade language modeling capabilities, would cancel out undesirable biases. By testing BERT models trained on texts containing biased statements primed with varying degrees of privacy, our study reveals that while stereotypical bias generally diminishes when privacy is tightened, text privatization does not uniformly equate to diminishing bias across all social domains. This highlights the need for careful diagnosis of bias in LMs that undergo text privatization.",
}
|
Differential Privacy (DP) can be applied to raw text by exploiting the spatial arrangement of words in an embedding space. We investigate the implications of such text privatization on Language Models (LMs) and their tendency towards stereotypical associations. Since previous studies documented that linguistic proficiency correlates with stereotypical bias, one could assume that techniques for text privatization, which are known to degrade language modeling capabilities, would cancel out undesirable biases. By testing BERT models trained on texts containing biased statements primed with varying degrees of privacy, our study reveals that while stereotypical bias generally diminishes when privacy is tightened, text privatization does not uniformly equate to diminishing bias across all social domains. This highlights the need for careful diagnosis of bias in LMs that undergo text privatization.
|
[
"Arnold, Stefan",
"Gr{\\\"o}bner, Rene",
"Schreiner, Annika"
] |
Characterizing Stereotypical Bias from Privacy-preserving Pre-Training
|
privatenlp-1.3
|
Poster
|
2408.00162v1
|
https://aclanthology.org/2024.privatenlp-1.4.bib
|
@inproceedings{harel-etal-2024-protecting,
title = "Protecting Privacy in Classifiers by Token Manipulation",
author = "Harel, Re{'}em and
Elboher, Yair and
Pinter, Yuval",
editor = "Habernal, Ivan and
Ghanavati, Sepideh and
Ravichander, Abhilasha and
Jain, Vijayanta and
Thaine, Patricia and
Igamberdiev, Timour and
Mireshghallah, Niloofar and
Feyisetan, Oluwaseyi",
booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.privatenlp-1.4",
pages = "29--38",
abstract = "Using language models as a remote service entails sending private information to an untrusted provider. In addition, potential eavesdroppers can intercept the messages, thereby exposing the information. In this work, we explore the prospects of avoiding such data exposure at the level of text manipulation. We focus on text classification models, examining various token mapping and contextualized manipulation functions in order to see whether classifier accuracy may be maintained while keeping the original text unrecoverable. We find that although some token mapping functions are easy and straightforward to implement, they heavily influence performance on the downstream task, and via a sophisticated attacker can be reconstructed. In comparison, the contextualized manipulation provides an improvement in performance.",
}
|
Using language models as a remote service entails sending private information to an untrusted provider. In addition, potential eavesdroppers can intercept the messages, thereby exposing the information. In this work, we explore the prospects of avoiding such data exposure at the level of text manipulation. We focus on text classification models, examining various token mapping and contextualized manipulation functions in order to see whether classifier accuracy may be maintained while keeping the original text unrecoverable. We find that although some token mapping functions are easy and straightforward to implement, they heavily influence performance on the downstream task, and via a sophisticated attacker can be reconstructed. In comparison, the contextualized manipulation provides an improvement in performance.
|
[
"Harel, Re{'}em",
"Elboher, Yair",
"Pinter, Yuval"
] |
Protecting Privacy in Classifiers by Token Manipulation
|
privatenlp-1.4
|
Poster
|
2407.01334v2
|
https://aclanthology.org/2024.privatenlp-1.5.bib
|
@inproceedings{meisenbacher-etal-2024-collocation,
title = "A Collocation-based Method for Addressing Challenges in Word-level Metric Differential Privacy",
author = "Meisenbacher, Stephen and
Chevli, Maulik and
Matthes, Florian",
editor = "Habernal, Ivan and
Ghanavati, Sepideh and
Ravichander, Abhilasha and
Jain, Vijayanta and
Thaine, Patricia and
Igamberdiev, Timour and
Mireshghallah, Niloofar and
Feyisetan, Oluwaseyi",
booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.privatenlp-1.5",
pages = "39--51",
abstract = "Applications of Differential Privacy (DP) in NLP must distinguish between the syntactic level on which a proposed mechanism operates, often taking the form of *word-level* or *document-level* privatization. Recently, several word-level *Metric* Differential Privacy approaches have been proposed, which rely on this generalized DP notion for operating in word embedding spaces. These approaches, however, often fail to produce semantically coherent textual outputs, and their application at the sentence- or document-level is only possible by a basic composition of word perturbations. In this work, we strive to address these challenges by operating *between* the word and sentence levels, namely with *collocations*. By perturbing n-grams rather than single words, we devise a method where composed privatized outputs have higher semantic coherence and variable length. This is accomplished by constructing an embedding model based on frequently occurring word groups, in which unigram words co-exist with bi- and trigram collocations. We evaluate our method in utility and privacy tests, which make a clear case for tokenization strategies beyond the word level.",
}
|
Applications of Differential Privacy (DP) in NLP must distinguish between the syntactic level on which a proposed mechanism operates, often taking the form of *word-level* or *document-level* privatization. Recently, several word-level *Metric* Differential Privacy approaches have been proposed, which rely on this generalized DP notion for operating in word embedding spaces. These approaches, however, often fail to produce semantically coherent textual outputs, and their application at the sentence- or document-level is only possible by a basic composition of word perturbations. In this work, we strive to address these challenges by operating *between* the word and sentence levels, namely with *collocations*. By perturbing n-grams rather than single words, we devise a method where composed privatized outputs have higher semantic coherence and variable length. This is accomplished by constructing an embedding model based on frequently occurring word groups, in which unigram words co-exist with bi- and trigram collocations. We evaluate our method in utility and privacy tests, which make a clear case for tokenization strategies beyond the word level.
|
[
"Meisenbacher, Stephen",
"Chevli, Maulik",
"Matthes, Florian"
] |
A Collocation-based Method for Addressing Challenges in Word-level Metric Differential Privacy
|
privatenlp-1.5
|
Poster
|
2407.00638v1
|