|
---
title: README
emoji: π
colorFrom: pink
colorTo: gray
sdk: static
pinned: false
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/64bfc4d55ce3d382c05c0f9a/1zPQcwqt9Li_gCvd04_2_.png
---
|
|
|
**JQL-AI (pronounced Jackal-AI)** is a community of machine learning researchers committed to advancing the development of **multilingual foundation models**.
|
|
|
## Latest Research
|
|
|
- [Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models](https://huggingface.co/spaces/JQL-AI/JQL)

- [Tokenizer Choice For LLM Training: Negligible or Crucial?](https://aclanthology.org/2024.findings-naacl.247/)

- [Investigating Multilingual Instruction-Tuning: Do Polyglot Models Demand for Multilingual Instructions?](https://aclanthology.org/2024.emnlp-main.1159/)

- [Do Multilingual Large Language Models Mitigate Stereotype Bias?](https://aclanthology.org/2024.c3nlp-1.6.pdf)
|
|
|
---