Update README.md
The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs) consists of adapted generative models at the 7B scale (text in/text out), derived from **Mistral-7B-Base-v0.1**.

*Mistral-v0.1-Italian-CLP* is a continually trained Mistral model, obtained after tokenizer substitution.

The tokenizer of this model after adaptation is the same as that of [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).
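
As a rough illustration of what tokenizer substitution involves, the sketch below swaps in the Minerva tokenizer and resizes the embedding matrix before continued training. It is a minimal sketch using plain `transformers` calls, not the CLP embedding-initialization procedure from the paper:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model plus the replacement (Minerva) tokenizer.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("sapienzanlp/Minerva-3B-base-v1.0")

# Resize the input/output embeddings to the new vocabulary size. New rows
# are randomly initialized here; a CLP-style initialization of the new
# embeddings would replace this step.
model.resize_token_embeddings(len(tokenizer))
```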
**Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR

## Data used for the adaptation
The **Mistral-7B-v0.1-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data are skewed toward Italian, with English making up one quarter of the mix (a 3:1 Italian-to-English ratio): the first 9B tokens are taken from the Italian portion of CulturaX and the first 3B tokens from the English portion.
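
A minimal sketch of how such a mixture could be drawn, assuming the `datasets` streaming API (the `take_tokens` helper is illustrative, not from the source, and CulturaX may require accepting its terms on the Hub):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sapienzanlp/Minerva-3B-base-v1.0")

# Stream each language split so the leading tokens can be taken
# without downloading the whole corpus.
italian = load_dataset("uonlp/CulturaX", "it", split="train", streaming=True)
english = load_dataset("uonlp/CulturaX", "en", split="train", streaming=True)

def take_tokens(stream, budget):
    """Yield document texts from `stream` until ~`budget` tokens are seen."""
    seen = 0
    for doc in stream:
        yield doc["text"]
        seen += len(tokenizer.encode(doc["text"]))
        if seen >= budget:
            break

# 9B Italian + 3B English tokens: English is one quarter of the mix.
italian_docs = take_tokens(italian, 9_000_000_000)
english_docs = take_tokens(english, 3_000_000_000)
```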
Usage is via the `transformers` pipeline API. The diff shows only the final call, so the setup below is a minimal sketch assuming the standard text-generation pipeline; the model id is a placeholder, not confirmed by this card:

```python
import torch
import transformers

# Placeholder model id: substitute the actual repository name for this model.
model_id = "sapienzanlp/Mistral-7B-v0.1-italian-clp"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline("Cosa si può fare in una bella giornata di sole?")
```
Code: https://github.com/SapienzaNLP/sava
## Citation
If you use any part of this work, please consider citing the paper as follows:
```bibtex
...
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2504.17025},
}
```