| column | dtype | values / lengths |
|---------------|---------------|------------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
translation
transformers
### opus-mt-de-pl

* source languages: de
* target languages: pl
* OPUS readme: [de-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.pl | 41.2 | 0.631 |
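The card itself does not include a usage snippet; a minimal sketch of loading this checkpoint with the `transformers` MarianMT classes might look like the following (the German example sentence is illustrative, not from the card):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-pl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a German sentence into Polish (illustrative input).
batch = tokenizer(["Das Wetter ist heute schön."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```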
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-de-pl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "pl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-de-pon

* source languages: de
* target languages: pon
* OPUS readme: [de-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.pon | 21.0 | 0.442 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-de-pon
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "pon", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### deu-tgl

* source group: German
* target group: Tagalog
* OPUS readme: [deu-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-tgl/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): tgl_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.tgl | 21.2 | 0.541 |

### System Info:
- hf_name: deu-tgl
- source_languages: deu
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'tl']
- src_constituents: {'deu'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: tgl
- short_pair: de-tl
- chrF2_score: 0.541
- bleu: 21.2
- brevity_penalty: 1.0
- ref_len: 2329.0
- src_name: German
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: tl
- prefer_old: False
- long_pair: deu-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
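The BLEU and chr-F figures in these cards come from the released test-set translation and reference files; a hedged sketch of recomputing such corpus scores with the `sacrebleu` package is shown below (the plain-text file names are placeholders, not the exact layout of the released `*.test.txt` archives):

```python
# Sketch: recompute corpus-level BLEU / chrF from a hypotheses file and a
# reference file, one sentence per line. File names are placeholders.
from sacrebleu.metrics import BLEU, CHRF

with open("hypotheses.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

bleu = BLEU().corpus_score(hyps, [refs])   # corpus BLEU
chrf = CHRF().corpus_score(hyps, [refs])   # chrF (chrF2 by default)
print(bleu.score, chrf.score)
```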
{"language": ["de", "tl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-de-tl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "tl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### deu-ukr

* source group: German
* target group: Ukrainian
* OPUS readme: [deu-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ukr/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ukr/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.ukr | 40.2 | 0.612 |

### System Info:
- hf_name: deu-ukr
- source_languages: deu
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'uk']
- src_constituents: {'deu'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ukr/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: ukr
- short_pair: de-uk
- chrF2_score: 0.612
- bleu: 40.2
- brevity_penalty: 0.984
- ref_len: 54213.0
- src_name: German
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: uk
- prefer_old: False
- long_pair: deu-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["de", "uk"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-de-uk
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### deu-vie

* source group: German
* target group: Vietnamese
* OPUS readme: [deu-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-vie/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.vie | 25.0 | 0.443 |

### System Info:
- hf_name: deu-vie
- source_languages: deu
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'vi']
- src_constituents: {'deu'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: vie
- short_pair: de-vi
- chrF2_score: 0.443
- bleu: 25.0
- brevity_penalty: 1.0
- ref_len: 3768.0
- src_name: German
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: vi
- prefer_old: False
- long_pair: deu-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["de", "vi"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-de-vi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "vi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### dra-eng

* source group: Dravidian languages
* target group: English
* OPUS readme: [dra-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md)
* model: transformer
* source language(s): kan mal tam tel
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kan-eng.kan.eng | 9.1 | 0.312 |
| Tatoeba-test.mal-eng.mal.eng | 42.0 | 0.584 |
| Tatoeba-test.multi.eng | 30.0 | 0.493 |
| Tatoeba-test.tam-eng.tam.eng | 30.2 | 0.467 |
| Tatoeba-test.tel-eng.tel.eng | 15.9 | 0.378 |

### System Info:
- hf_name: dra-eng
- source_languages: dra
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ta', 'kn', 'ml', 'te', 'dra', 'en']
- src_constituents: {'tam', 'kan', 'mal', 'tel'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt
- src_alpha3: dra
- tgt_alpha3: eng
- short_pair: dra-en
- chrF2_score: 0.493
- bleu: 30.0
- brevity_penalty: 1.0
- ref_len: 10641.0
- src_name: Dravidian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: dra
- tgt_alpha2: en
- prefer_old: False
- long_pair: dra-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
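Because the source side of this checkpoint is multilingual while the target is always English, input in any of the listed Dravidian languages can be fed in directly, with no target-language token. A minimal sketch using the `transformers` `pipeline` helper (the Tamil sentence is an illustrative placeholder):

```python
from transformers import pipeline

# One checkpoint covers Kannada, Malayalam, Tamil and Telugu input;
# the target is always English, so no >>id<< token is required.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-dra-en")
print(translator("நான் புத்தகம் படிக்கிறேன்."))  # illustrative Tamil input
```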
{"language": ["ta", "kn", "ml", "te", "dra", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-dra-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ta", "kn", "ml", "te", "dra", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ee-de

* source languages: ee
* target languages: de
* OPUS readme: [ee-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-de/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.de | 22.3 | 0.430 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ee-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ee-en

* source languages: ee
* target languages: en
* OPUS readme: [ee-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.en | 39.3 | 0.556 |
| Tatoeba.ee.en | 21.2 | 0.569 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ee-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ee-es

* source languages: ee
* target languages: es
* OPUS readme: [ee-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-es/opus-2020-01-15.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.es | 26.4 | 0.449 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ee-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ee-fi

* source languages: ee
* target languages: fi
* OPUS readme: [ee-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.fi | 25.0 | 0.482 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ee-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ee-fr

* source languages: ee
* target languages: fr
* OPUS readme: [ee-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fr/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.fr | 27.1 | 0.450 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ee-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ee-sv

* source languages: ee
* target languages: sv
* OPUS readme: [ee-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-sv/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.sv | 28.9 | 0.472 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ee-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-efi-de

* source languages: efi
* target languages: de
* OPUS readme: [efi-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.de | 21.0 | 0.401 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-efi-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "efi", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-efi-en

* source languages: efi
* target languages: en
* OPUS readme: [efi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.en | 35.4 | 0.510 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-efi-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "efi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-efi-fi

* source languages: efi
* target languages: fi
* OPUS readme: [efi-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.fi | 23.6 | 0.450 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-efi-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "efi", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-efi-fr

* source languages: efi
* target languages: fr
* OPUS readme: [efi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fr/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.fr | 25.1 | 0.419 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-efi-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "efi", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-efi-sv

* source languages: efi
* target languages: sv
* OPUS readme: [efi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-sv/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.sv | 26.8 | 0.447 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-efi-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "efi", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ell-ara

* source group: Modern Greek (1453-)
* target group: Arabic
* OPUS readme: [ell-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md)
* model: transformer
* source language(s): ell
* target language(s): ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell.ara | 21.9 | 0.485 |

### System Info:
- hf_name: ell-ara
- source_languages: ell
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'ar']
- src_constituents: {'ell'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt
- src_alpha3: ell
- tgt_alpha3: ara
- short_pair: el-ar
- chrF2_score: 0.485
- bleu: 21.9
- brevity_penalty: 0.972
- ref_len: 1686.0
- src_name: Modern Greek (1453-)
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: el
- tgt_alpha2: ar
- prefer_old: False
- long_pair: ell-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["el", "ar"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-el-ar
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "el", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ell-epo

* source group: Modern Greek (1453-)
* target group: Esperanto
* OPUS readme: [ell-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-epo/README.md)
* model: transformer-align
* source language(s): ell
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell.epo | 32.4 | 0.517 |

### System Info:
- hf_name: ell-epo
- source_languages: ell
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'eo']
- src_constituents: {'ell'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.test.txt
- src_alpha3: ell
- tgt_alpha3: epo
- short_pair: el-eo
- chrF2_score: 0.517
- bleu: 32.4
- brevity_penalty: 0.979
- ref_len: 3807.0
- src_name: Modern Greek (1453-)
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: el
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ell-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["el", "eo"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-el-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "el", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-el-fi

* source languages: el
* target languages: fi
* OPUS readme: [el-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.el.fi | 25.3 | 0.517 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-el-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "el", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-el-fr

* source languages: el
* target languages: fr
* OPUS readme: [el-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.el.fr | 63.0 | 0.741 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-el-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "el", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-el-sv

* source languages: el
* target languages: sv
* OPUS readme: [el-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-sv/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.el.sv | 23.6 | 0.498 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-el-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "el", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-INSULAR_CELTIC

* source languages: en
* target languages: ga,cy,br,gd,kw,gv
* OPUS readme: [en-ga+cy+br+gd+kw+gv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ga+cy+br+gd+kw+gv/README.md)
* dataset: opus+techiaith+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus+techiaith+bt-2020-04-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.zip)
* test set translations: [opus+techiaith+bt-2020-04-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.test.txt)
* test set scores: [opus+techiaith+bt-2020-04-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ga | 22.8 | 0.404 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-CELTIC
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "cel", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ROMANCE

* source languages: en
* target languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la
* OPUS readme: [en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-04-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.zip)
* test set translations: [opus-2020-04-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.test.txt)
* test set scores: [opus-2020-04-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.la | 50.1 | 0.693 |
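For multi-target checkpoints like this one the card notes that a sentence-initial `>>id<<` token selects the target language. A minimal sketch of that convention with the `transformers` MarianMT classes (the example sentences are illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The leading >>id<< token picks the target language, as described above.
src = [
    ">>fr<< This is a sentence in English that we want to translate to French.",
    ">>pt_br<< And this one should come out in Brazilian Portuguese.",
    ">>es<< And this one in Spanish.",
]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```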
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ROMANCE
null
[ "transformers", "pytorch", "tf", "jax", "rust", "marian", "text2text-generation", "translation", "en", "roa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-aav

* source group: English
* target group: Austro-Asiatic languages
* OPUS readme: [eng-aav](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aav/README.md)
* model: transformer
* source language(s): eng
* target language(s): hoc hoc_Latn kha khm khm_Latn mnw vie vie_Hani
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-hoc.eng.hoc | 0.1 | 0.033 |
| Tatoeba-test.eng-kha.eng.kha | 0.4 | 0.043 |
| Tatoeba-test.eng-khm.eng.khm | 0.2 | 0.242 |
| Tatoeba-test.eng-mnw.eng.mnw | 0.8 | 0.003 |
| Tatoeba-test.eng.multi | 16.1 | 0.311 |
| Tatoeba-test.eng-vie.eng.vie | 33.2 | 0.508 |

### System Info:
- hf_name: eng-aav
- source_languages: eng
- target_languages: aav
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aav/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'vi', 'km', 'aav']
- src_constituents: {'eng'}
- tgt_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie_Hani', 'khm_Latn', 'hoc_Latn', 'hoc'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: aav
- short_pair: en-aav
- chrF2_score: 0.311
- bleu: 16.1
- brevity_penalty: 1.0
- ref_len: 38261.0
- src_name: English
- tgt_name: Austro-Asiatic languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: aav
- prefer_old: False
- long_pair: eng-aav
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["en", "vi", "km", "aav"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-aav
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "vi", "km", "aav", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-af

* source languages: en
* target languages: af
* OPUS readme: [en-af](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-af/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.af | 56.1 | 0.741 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-af
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "af", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-afa

* source group: English
* target group: Afro-Asiatic languages
* OPUS readme: [eng-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-amh.eng.amh | 11.6 | 0.504 |
| Tatoeba-test.eng-ara.eng.ara | 12.0 | 0.404 |
| Tatoeba-test.eng-hau.eng.hau | 10.2 | 0.429 |
| Tatoeba-test.eng-heb.eng.heb | 32.3 | 0.551 |
| Tatoeba-test.eng-kab.eng.kab | 1.6 | 0.191 |
| Tatoeba-test.eng-mlt.eng.mlt | 17.7 | 0.551 |
| Tatoeba-test.eng.multi | 14.4 | 0.375 |
| Tatoeba-test.eng-rif.eng.rif | 1.7 | 0.103 |
| Tatoeba-test.eng-shy.eng.shy | 0.8 | 0.090 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.429 |
| Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.238 |

### System Info:
- hf_name: eng-afa
- source_languages: eng
- target_languages: afa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
- src_constituents: {'eng'}
- tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: afa
- short_pair: en-afa
- chrF2_score: 0.375
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 58110.0
- src_name: English
- tgt_name: Afro-Asiatic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: afa
- prefer_old: False
- long_pair: eng-afa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["en", "so", "ti", "am", "he", "mt", "ar", "afa"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-afa
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "so", "ti", "am", "he", "mt", "ar", "afa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-alv

* source group: English
* target group: Atlantic-Congo languages
* OPUS readme: [eng-alv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md)
* model: transformer
* source language(s): eng
* target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-ewe.eng.ewe | 4.9 | 0.212 |
| Tatoeba-test.eng-ful.eng.ful | 0.6 | 0.079 |
| Tatoeba-test.eng-ibo.eng.ibo | 3.5 | 0.255 |
| Tatoeba-test.eng-kin.eng.kin | 10.5 | 0.510 |
| Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.273 |
| Tatoeba-test.eng-lug.eng.lug | 5.3 | 0.340 |
| Tatoeba-test.eng.multi | 11.4 | 0.429 |
| Tatoeba-test.eng-nya.eng.nya | 18.1 | 0.595 |
| Tatoeba-test.eng-run.eng.run | 13.9 | 0.484 |
| Tatoeba-test.eng-sag.eng.sag | 5.3 | 0.194 |
| Tatoeba-test.eng-sna.eng.sna | 26.2 | 0.623 |
| Tatoeba-test.eng-swa.eng.swa | 1.0 | 0.141 |
| Tatoeba-test.eng-toi.eng.toi | 7.0 | 0.224 |
| Tatoeba-test.eng-tso.eng.tso | 46.7 | 0.643 |
| Tatoeba-test.eng-umb.eng.umb | 7.8 | 0.359 |
| Tatoeba-test.eng-wol.eng.wol | 6.8 | 0.191 |
| Tatoeba-test.eng-xho.eng.xho | 27.1 | 0.629 |
| Tatoeba-test.eng-yor.eng.yor | 17.4 | 0.356 |
| Tatoeba-test.eng-zul.eng.zul | 34.1 | 0.729 |

### System Info:
- hf_name: eng-alv
- source_languages: eng
- target_languages: alv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv']
- src_constituents: {'eng'}
- tgt_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: alv
- short_pair: en-alv
- chrF2_score: 0.429
- bleu: 11.4
- brevity_penalty: 1.0
- ref_len: 10603.0
- src_name: English
- tgt_name: Atlantic-Congo languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: alv
- prefer_old: False
- long_pair: eng-alv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["en", "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "alv"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-alv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "alv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-ara

* source group: English
* target group: Arabic
* OPUS readme: [eng-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb apc apc_Latn ara ara_Latn arq arq_Latn ary arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.ara | 14.0 | 0.437 |

### System Info:
- hf_name: eng-ara
- source_languages: eng
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ar']
- src_constituents: {'eng'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.test.txt
- src_alpha3: eng
- tgt_alpha3: ara
- short_pair: en-ar
- chrF2_score: 0.437
- bleu: 14.0
- brevity_penalty: 1.0
- ref_len: 58935.0
- src_name: English
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: en
- tgt_alpha2: ar
- prefer_old: False
- long_pair: eng-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["en", "ar"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ar
null
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "en", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-aze

* source group: English
* target group: Azerbaijani
* OPUS readme: [eng-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): aze_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.aze | 18.6 | 0.477 |

### System Info:
- hf_name: eng-aze
- source_languages: eng
- target_languages: aze
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'az']
- src_constituents: {'eng'}
- tgt_constituents: {'aze_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt
- src_alpha3: eng
- tgt_alpha3: aze
- short_pair: en-az
- chrF2_score: 0.477
- bleu: 18.6
- brevity_penalty: 1.0
- ref_len: 13012.0
- src_name: English
- tgt_name: Azerbaijani
- train_date: 2020-06-16
- src_alpha2: en
- tgt_alpha2: az
- prefer_old: False
- long_pair: eng-aze
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["en", "az"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-az
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "az", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-bat

* source group: English
* target group: Baltic languages
* OPUS readme: [eng-bat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bat/README.md)
* model: transformer
* source language(s): eng
* target language(s): lav lit ltg prg_Latn sgs
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enlv-englav.eng.lav | 24.0 | 0.546 |
| newsdev2019-enlt-englit.eng.lit | 20.9 | 0.533 |
| newstest2017-enlv-englav.eng.lav | 18.3 | 0.506 |
| newstest2019-enlt-englit.eng.lit | 13.6 | 0.466 |
| Tatoeba-test.eng-lav.eng.lav | 42.8 | 0.652 |
| Tatoeba-test.eng-lit.eng.lit | 37.1 | 0.650 |
| Tatoeba-test.eng.multi | 37.0 | 0.616 |
| Tatoeba-test.eng-prg.eng.prg | 0.5 | 0.130 |
| Tatoeba-test.eng-sgs.eng.sgs | 4.1 | 0.178 |

### System Info:
- hf_name: eng-bat
- source_languages: eng
- target_languages: bat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'lt', 'lv', 'bat']
- src_constituents: {'eng'}
- tgt_constituents: {'lit', 'lav', 'prg_Latn', 'ltg', 'sgs'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: bat
- short_pair: en-bat
- chrF2_score: 0.616
- bleu: 37.0
- brevity_penalty: 0.956
- ref_len: 26417.0
- src_name: English
- tgt_name: Baltic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: bat
- prefer_old: False
- long_pair: eng-bat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["en", "lt", "lv", "bat"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-bat
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "lt", "lv", "bat", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-bcl

* source languages: en
* target languages: bcl
* OPUS readme: [en-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bcl/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.zip)
* test set translations: [opus+bt-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.test.txt)
* test set scores: [opus+bt-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bcl | 54.3 | 0.722 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-bcl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bcl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-bem

* source languages: en
* target languages: bem
* OPUS readme: [en-bem](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bem/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bem | 29.7 | 0.532 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-bem
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bem", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ber

* source languages: en
* target languages: ber
* OPUS readme: [en-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ber | 29.7 | 0.544 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ber
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ber", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-bul * source group: English * target group: Bulgarian * OPUS readme: [eng-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md) * model: transformer * source language(s): eng * target language(s): bul bul_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.bul | 50.6 | 0.680 | ### System Info: - hf_name: eng-bul - source_languages: eng - target_languages: bul - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'bg'] - src_constituents: {'eng'} - tgt_constituents: {'bul', 'bul_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt - src_alpha3: eng - tgt_alpha3: bul - short_pair: en-bg - chrF2_score: 0.68 - bleu: 50.6 - brevity_penalty: 0.96 - ref_len: 69504.0 - src_name: English - tgt_name: Bulgarian - train_date: 2020-07-03 - src_alpha2: en - tgt_alpha2: bg - prefer_old: False - long_pair: eng-bul - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
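Even though Bulgarian is the only named target, the card lists two target variants (`bul`, `bul_Latn`) and states that the sentence-initial `>>id<<` token is required. A hedged sketch with the pipeline API, keeping the token inside the input string (the sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-bg")

# >>bul<< selects Cyrillic Bulgarian; >>bul_Latn<< would select the
# romanized variant listed in the card.
print(translator(">>bul<< I would like a cup of coffee.")[0]["translation_text"])
```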
{"language": ["en", "bg"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-bg
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bg", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-bi * source languages: en * target languages: bi * OPUS readme: [en-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.bi | 36.4 | 0.543 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-bi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-bnt * source group: English * target group: Bantu languages * OPUS readme: [eng-bnt](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md) * model: transformer * source language(s): eng * target language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-kin.eng.kin | 12.5 | 0.519 | | Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.277 | | Tatoeba-test.eng-lug.eng.lug | 4.8 | 0.415 | | Tatoeba-test.eng.multi | 12.1 | 0.449 | | Tatoeba-test.eng-nya.eng.nya | 22.1 | 0.616 | | Tatoeba-test.eng-run.eng.run | 13.2 | 0.492 | | Tatoeba-test.eng-sna.eng.sna | 32.1 | 0.669 | | Tatoeba-test.eng-swa.eng.swa | 1.7 | 0.180 | | Tatoeba-test.eng-toi.eng.toi | 10.7 | 0.266 | | Tatoeba-test.eng-tso.eng.tso | 26.9 | 0.631 | | Tatoeba-test.eng-umb.eng.umb | 5.2 | 0.295 | | Tatoeba-test.eng-xho.eng.xho | 22.6 | 0.615 | | Tatoeba-test.eng-zul.eng.zul | 41.1 | 0.769 | ### System Info: - hf_name: eng-bnt - source_languages: eng - target_languages: bnt - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt'] - src_constituents: {'eng'} - tgt_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt - src_alpha3: eng - tgt_alpha3: bnt - short_pair: en-bnt - chrF2_score: 0.449 - bleu: 12.1 - brevity_penalty: 1.0 - ref_len: 9989.0 - src_name: English - tgt_name: Bantu languages - train_date: 2020-07-26 - src_alpha2: en - tgt_alpha2: bnt - prefer_old: False - long_pair: eng-bnt - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
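Since one checkpoint serves all of the Bantu targets above, a small loop over target tokens is an easy way to compare them. This is only a sketch, assuming the `>>id<<` codes listed in the card (e.g. `kin`, `sna`, `zul`) are used verbatim; the English sentence is invented for illustration.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-bnt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "Good morning, my friend."
for lang in ["kin", "sna", "zul"]:  # a few of the target IDs from this card
    batch = tokenizer([f">>{lang}<< {sentence}"], return_tensors="pt")
    out = model.generate(**batch)
    print(lang, tokenizer.decode(out[0], skip_special_tokens=True))
```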
{"language": ["en", "sn", "zu", "rw", "lg", "ts", "ln", "ny", "xh", "rn", "bnt"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-bnt
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "sn", "zu", "rw", "lg", "ts", "ln", "ny", "xh", "rn", "bnt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-bzs * source languages: en * target languages: bzs * OPUS readme: [en-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bzs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.bzs | 43.4 | 0.612 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-bzs
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bzs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ca * source languages: en * target languages: ca * OPUS readme: [en-ca](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ca/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.ca | 47.2 | 0.665 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ca
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ceb * source languages: en * target languages: ceb * OPUS readme: [en-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ceb/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ceb | 51.3 | 0.704 | | Tatoeba.en.ceb | 31.3 | 0.600 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ceb
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ceb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-cel * source group: English * target group: Celtic languages * OPUS readme: [eng-cel](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md) * model: transformer * source language(s): eng * target language(s): bre cor cym gla gle glv * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-bre.eng.bre | 11.5 | 0.338 | | Tatoeba-test.eng-cor.eng.cor | 0.3 | 0.095 | | Tatoeba-test.eng-cym.eng.cym | 31.0 | 0.549 | | Tatoeba-test.eng-gla.eng.gla | 7.6 | 0.317 | | Tatoeba-test.eng-gle.eng.gle | 35.9 | 0.582 | | Tatoeba-test.eng-glv.eng.glv | 9.9 | 0.454 | | Tatoeba-test.eng.multi | 18.0 | 0.342 | ### System Info: - hf_name: eng-cel - source_languages: eng - target_languages: cel - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel'] - src_constituents: {'eng'} - tgt_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: cel - short_pair: en-cel - chrF2_score: 0.342 - bleu: 18.0 - brevity_penalty: 0.9590000000000001 - ref_len: 45370.0 - src_name: English - tgt_name: Celtic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: cel - prefer_old: False - long_pair: eng-cel - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
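If you are unsure which `>>id<<` tokens a multilingual checkpoint actually accepts, they can be recovered from the tokenizer vocabulary rather than guessed. The following sketch assumes only the Hub ID above and the `>>id<<` token convention described in this card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-cel")

# Target-language tokens are stored in the vocabulary as >>id<< entries,
# so filtering the vocab lists the Celtic targets this checkpoint supports.
lang_tokens = sorted(
    tok for tok in tokenizer.get_vocab()
    if tok.startswith(">>") and tok.endswith("<<")
)
print(lang_tokens)  # expected to include e.g. >>cym<<, >>gle<<, >>gla<<
```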
{"language": ["en", "gd", "ga", "br", "kw", "gv", "cy", "cel"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-cel
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "gd", "ga", "br", "kw", "gv", "cy", "cel", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-chk * source languages: en * target languages: chk * OPUS readme: [en-chk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-chk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.chk | 26.1 | 0.468 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-chk
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "chk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-cpf * source group: English * target group: Creoles and pidgins, French‑based * OPUS readme: [eng-cpf](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpf/README.md) * model: transformer * source language(s): eng * target language(s): gcf_Latn hat mfe * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-gcf.eng.gcf | 6.2 | 0.262 | | Tatoeba-test.eng-hat.eng.hat | 25.7 | 0.451 | | Tatoeba-test.eng-mfe.eng.mfe | 80.1 | 0.900 | | Tatoeba-test.eng.multi | 15.9 | 0.354 | ### System Info: - hf_name: eng-cpf - source_languages: eng - target_languages: cpf - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpf/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ht', 'cpf'] - src_constituents: {'eng'} - tgt_constituents: {'gcf_Latn', 'hat', 'mfe'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.test.txt - src_alpha3: eng - tgt_alpha3: cpf - short_pair: en-cpf - chrF2_score: 0.354 - bleu: 15.9 - brevity_penalty: 1.0 - ref_len: 1012.0 - src_name: English - tgt_name: Creoles and pidgins, French‑based - train_date: 2020-07-26 - src_alpha2: en - tgt_alpha2: cpf - prefer_old: False - long_pair: eng-cpf - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["en", "ht", "cpf"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-cpf
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ht", "cpf", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-cpp * source group: English * target group: Creoles and pidgins, Portuguese-based * OPUS readme: [eng-cpp](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md) * model: transformer * source language(s): eng * target language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-msa.eng.msa | 32.6 | 0.573 | | Tatoeba-test.eng.multi | 32.7 | 0.574 | | Tatoeba-test.eng-pap.eng.pap | 42.5 | 0.633 | ### System Info: - hf_name: eng-cpp - source_languages: eng - target_languages: cpp - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'id', 'cpp'] - src_constituents: {'eng'} - tgt_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: cpp - short_pair: en-cpp - chrF2_score: 0.574 - bleu: 32.7 - brevity_penalty: 0.996 - ref_len: 34010.0 - src_name: English - tgt_name: Creoles and pidgins, Portuguese-based - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: cpp - prefer_old: False - long_pair: eng-cpp - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["en", "id", "cpp"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-cpp
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "id", "cpp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-crs * source languages: en * target languages: crs * OPUS readme: [en-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-crs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.crs | 45.2 | 0.617 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-crs
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "crs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-cs * source languages: en * target languages: cs * OPUS readme: [en-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009.en.cs | 22.8 | 0.507 | | news-test2008.en.cs | 20.7 | 0.485 | | newstest2009.en.cs | 21.8 | 0.500 | | newstest2010.en.cs | 22.1 | 0.505 | | newstest2011.en.cs | 23.2 | 0.507 | | newstest2012.en.cs | 20.8 | 0.482 | | newstest2013.en.cs | 24.7 | 0.514 | | newstest2015-encs.en.cs | 24.9 | 0.527 | | newstest2016-encs.en.cs | 26.7 | 0.540 | | newstest2017-encs.en.cs | 22.7 | 0.503 | | newstest2018-encs.en.cs | 22.9 | 0.504 | | newstest2019-encs.en.cs | 24.9 | 0.518 | | Tatoeba.en.cs | 46.1 | 0.647 |
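For batches of sentences, the model can be driven directly through `generate`, which also exposes decoding options such as beam size. A minimal sketch assuming only the model ID above (the sentences are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-cs"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentences = [
    "The meeting has been moved to Thursday.",
    "Please send me the updated report.",
]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch, num_beams=4)  # beam search; greedy also works
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```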
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-cs
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "cs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-cus * source group: English * target group: Cushitic languages * OPUS readme: [eng-cus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cus/README.md) * model: transformer * source language(s): eng * target language(s): som * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.multi | 16.0 | 0.173 | | Tatoeba-test.eng-som.eng.som | 16.0 | 0.173 | ### System Info: - hf_name: eng-cus - source_languages: eng - target_languages: cus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'so', 'cus'] - src_constituents: {'eng'} - tgt_constituents: {'som'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: cus - short_pair: en-cus - chrF2_score: 0.17300000000000001 - bleu: 16.0 - brevity_penalty: 1.0 - ref_len: 3.0 - src_name: English - tgt_name: Cushitic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: cus - prefer_old: False - long_pair: eng-cus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["en", "so", "cus"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-cus
null
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "en", "so", "cus", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-cy * source languages: en * target languages: cy * OPUS readme: [en-cy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cy/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.cy | 25.3 | 0.487 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-cy
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "cy", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-da * source languages: en * target languages: da * OPUS readme: [en-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-da/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.da | 60.4 | 0.745 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-da
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "da", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-de ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation - **Language(s):** - Source Language: English - Target Language: German - **License:** CC-BY-4.0 - **Resources for more information:** - [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Uses #### Direct Use This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Further details about the dataset for this model can be found in the OPUS readme: [en-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-de/README.md) #### Training Data ##### Preprocessing * pre-processing: normalization + SentencePiece * dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT) * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.test.txt) ## Evaluation #### Results * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.eval.txt) #### Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009.en.de | 23.5 | 0.540 | | news-test2008.en.de | 23.5 | 0.529 | | newstest2009.en.de | 22.3 | 0.530 | | newstest2010.en.de | 24.9 | 0.544 | | newstest2011.en.de | 22.5 | 0.524 | | newstest2012.en.de | 23.0 | 0.525 | | newstest2013.en.de | 26.9 | 0.553 | | newstest2015-ende.en.de | 31.1 | 0.594 | | newstest2016-ende.en.de | 37.0 | 0.636 | | newstest2017-ende.en.de | 29.9 | 0.586 | | newstest2018-ende.en.de | 45.2 | 0.690 | | newstest2019-ende.en.de | 40.9 | 0.654 | | Tatoeba.en.de | 47.3 | 0.664 | ## Citation Information ```bibtex @InProceedings{TiedemannThottingal:EAMT2020, author = {J{\"o}rg Tiedemann and Santhosh Thottingal}, title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld}, booktitle = {Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)}, year = {2020}, address = {Lisbon, Portugal} } ``` ## How to Get Started With the Model ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de") ```
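The getting-started snippet above only loads the model; a hedged continuation showing an actual translation step (the example sentence is ours, not from the card) might look like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")

inputs = tokenizer("The conference starts at nine o'clock.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```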
{"license": "cc-by-4.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-de
null
[ "transformers", "pytorch", "tf", "jax", "rust", "marian", "text2text-generation", "translation", "en", "de", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-dra * source group: English * target group: Dravidian languages * OPUS readme: [eng-dra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md) * model: transformer * source language(s): eng * target language(s): kan mal tam tel * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-kan.eng.kan | 4.7 | 0.348 | | Tatoeba-test.eng-mal.eng.mal | 13.1 | 0.515 | | Tatoeba-test.eng.multi | 10.7 | 0.463 | | Tatoeba-test.eng-tam.eng.tam | 9.0 | 0.444 | | Tatoeba-test.eng-tel.eng.tel | 7.1 | 0.363 | ### System Info: - hf_name: eng-dra - source_languages: eng - target_languages: dra - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra'] - src_constituents: {'eng'} - tgt_constituents: {'tam', 'kan', 'mal', 'tel'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt - src_alpha3: eng - tgt_alpha3: dra - short_pair: en-dra - chrF2_score: 0.46299999999999997 - bleu: 10.7 - brevity_penalty: 1.0 - ref_len: 7928.0 - src_name: English - tgt_name: Dravidian languages - train_date: 2020-07-26 - src_alpha2: en - tgt_alpha2: dra - prefer_old: False - long_pair: eng-dra - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["en", "ta", "kn", "ml", "te", "dra"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-dra
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ta", "kn", "ml", "te", "dra", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ee * source languages: en * target languages: ee * OPUS readme: [en-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ee/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ee | 38.2 | 0.591 | | Tatoeba.en.ee | 6.0 | 0.347 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ee
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ee", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-efi * source languages: en * target languages: efi * OPUS readme: [en-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-efi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.efi | 38.0 | 0.568 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-efi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "efi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-el * source languages: en * target languages: el * OPUS readme: [en-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-el/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.el | 56.4 | 0.745 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-el
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "el", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-eo * source languages: en * target languages: eo * OPUS readme: [en-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-eo/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.eo | 49.5 | 0.682 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-spa * source group: English * target group: Spanish * OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md) * model: transformer * source language(s): eng * target language(s): spa * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip) * test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt) * test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 | | news-test2008-engspa.eng.spa | 29.7 | 0.564 | | newstest2009-engspa.eng.spa | 30.2 | 0.578 | | newstest2010-engspa.eng.spa | 36.9 | 0.620 | | newstest2011-engspa.eng.spa | 38.2 | 0.619 | | newstest2012-engspa.eng.spa | 39.0 | 0.625 | | newstest2013-engspa.eng.spa | 35.0 | 0.598 | | Tatoeba-test.eng.spa | 54.9 | 0.721 | ### System Info: - hf_name: eng-spa - source_languages: eng - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'es'] - src_constituents: {'eng'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt - src_alpha3: eng - tgt_alpha3: spa - short_pair: en-es - chrF2_score: 0.721 - bleu: 54.9 - brevity_penalty: 0.978 - ref_len: 77311.0 - src_name: English - tgt_name: Spanish - train_date: 2020-08-18 00:00:00 - src_alpha2: en - tgt_alpha2: es - prefer_old: False - long_pair: eng-spa - helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82 - transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9 - port_machine: brutasse - port_time: 2020-08-24-18:20
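The pipeline API also accepts a list of sentences, which is convenient for translating a handful of inputs at once. The sketch below assumes nothing beyond the Hub ID above; the inputs are illustrative.

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

sentences = ["The train leaves at noon.", "Where is the nearest pharmacy?"]
for result in translator(sentences):
    print(result["translation_text"])
```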
{"language": ["en", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-es
null
[ "transformers", "pytorch", "tf", "jax", "marian", "text2text-generation", "translation", "en", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-et * source languages: en * target languages: et * OPUS readme: [en-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-et/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2018-enet.en.et | 21.8 | 0.540 | | newstest2018-enet.en.et | 23.3 | 0.556 | | Tatoeba.en.et | 54.0 | 0.717 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-et
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "et", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-eus * source group: English * target group: Basque * OPUS readme: [eng-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md) * model: transformer-align * source language(s): eng * target language(s): eus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.eus | 31.8 | 0.590 | ### System Info: - hf_name: eng-eus - source_languages: eng - target_languages: eus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'eu'] - src_constituents: {'eng'} - tgt_constituents: {'eus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: eus - short_pair: en-eu - chrF2_score: 0.59 - bleu: 31.8 - brevity_penalty: 0.9440000000000001 - ref_len: 7080.0 - src_name: English - tgt_name: Basque - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: eu - prefer_old: False - long_pair: eng-eus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["en", "eu"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-eu
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "eu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-euq * source group: English * target group: Basque (family) * OPUS readme: [eng-euq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md) * model: transformer * source language(s): eng * target language(s): eus * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.eus | 27.9 | 0.555 | | Tatoeba-test.eng-eus.eng.eus | 27.9 | 0.555 | ### System Info: - hf_name: eng-euq - source_languages: eng - target_languages: euq - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'euq'] - src_constituents: {'eng'} - tgt_constituents: {'eus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt - src_alpha3: eng - tgt_alpha3: euq - short_pair: en-euq - chrF2_score: 0.555 - bleu: 27.9 - brevity_penalty: 0.917 - ref_len: 7080.0 - src_name: English - tgt_name: Basque (family) - train_date: 2020-07-26 - src_alpha2: en - tgt_alpha2: euq - prefer_old: False - long_pair: eng-euq - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["en", "euq"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-euq
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "euq", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-fi * source languages: en * target languages: fi * OPUS readme: [en-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fi/README.md) * dataset: opus+bt-news * model: transformer * pre-processing: normalization + SentencePiece * download original weights: [opus+bt-news-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.zip) * test set translations: [opus+bt-news-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.test.txt) * test set scores: [opus+bt-news-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newstest2019-enfi.en.fi | 25.7 | 0.578 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-fiu * source group: English * target group: Finno-Ugrian languages * OPUS readme: [eng-fiu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md) * model: transformer * source language(s): eng * target language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2015-enfi-engfin.eng.fin | 18.7 | 0.522 | | newsdev2018-enet-engest.eng.est | 19.4 | 0.521 | | newssyscomb2009-enghun.eng.hun | 15.5 | 0.472 | | newstest2009-enghun.eng.hun | 15.4 | 0.468 | | newstest2015-enfi-engfin.eng.fin | 19.9 | 0.532 | | newstest2016-enfi-engfin.eng.fin | 21.1 | 0.544 | | newstest2017-enfi-engfin.eng.fin | 23.8 | 0.567 | | newstest2018-enet-engest.eng.est | 20.4 | 0.532 | | newstest2018-enfi-engfin.eng.fin | 15.6 | 0.498 | | newstest2019-enfi-engfin.eng.fin | 20.0 | 0.520 | | newstestB2016-enfi-engfin.eng.fin | 17.0 | 0.512 | | newstestB2017-enfi-engfin.eng.fin | 19.7 | 0.531 | | Tatoeba-test.eng-chm.eng.chm | 0.9 | 0.115 | | Tatoeba-test.eng-est.eng.est | 49.8 | 0.689 | | Tatoeba-test.eng-fin.eng.fin | 34.7 | 0.597 | | Tatoeba-test.eng-fkv.eng.fkv | 1.3 | 0.187 | | Tatoeba-test.eng-hun.eng.hun | 35.2 | 0.589 | | Tatoeba-test.eng-izh.eng.izh | 6.0 | 0.163 | | Tatoeba-test.eng-kom.eng.kom | 3.4 | 0.012 | | Tatoeba-test.eng-krl.eng.krl | 6.4 | 0.202 | | Tatoeba-test.eng-liv.eng.liv | 1.6 | 0.102 | | Tatoeba-test.eng-mdf.eng.mdf | 3.7 | 0.008 | | Tatoeba-test.eng.multi | 35.4 | 0.590 | | Tatoeba-test.eng-myv.eng.myv | 1.4 | 0.014 | | Tatoeba-test.eng-sma.eng.sma | 2.6 | 0.097 | | Tatoeba-test.eng-sme.eng.sme | 7.3 | 0.221 | | Tatoeba-test.eng-udm.eng.udm | 1.4 | 0.079 | ### System Info: - hf_name: eng-fiu - source_languages: eng - target_languages: fiu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'se', 'fi', 'hu', 'et', 'fiu'] - src_constituents: {'eng'} - tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: fiu - short_pair: en-fiu - chrF2_score: 0.59 - bleu: 35.4 - brevity_penalty: 0.9440000000000001 - ref_len: 59311.0 - src_name: English - tgt_name: Finno-Ugrian languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: fiu - prefer_old: False - long_pair: eng-fiu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
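One practical detail worth checking is that the tokenizer should keep the `>>id<<` prefix as a single piece, so the target choice is not split apart by SentencePiece. A small sketch (the sentence is illustrative; the IDs `fin` and `hun` come from the target list above):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-fiu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The language token is expected to survive tokenization as one piece.
print(tokenizer.tokenize(">>fin<< The lake is frozen."))

for lang in ["fin", "hun"]:
    batch = tokenizer([f">>{lang}<< The lake is frozen."], return_tensors="pt")
    out = model.generate(**batch)
    print(lang, tokenizer.decode(out[0], skip_special_tokens=True))
```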
{"language": ["en", "se", "fi", "hu", "et", "fiu"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-fiu
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "se", "fi", "hu", "et", "fiu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-fj * source languages: en * target languages: fj * OPUS readme: [en-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fj/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.fj | 34.0 | 0.561 | | Tatoeba.en.fj | 62.5 | 0.781 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-fj
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "fj", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-fr * source languages: en * target languages: fr * OPUS readme: [en-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdiscussdev2015-enfr.en.fr | 33.8 | 0.602 | | newsdiscusstest2015-enfr.en.fr | 40.0 | 0.643 | | newssyscomb2009.en.fr | 29.8 | 0.584 | | news-test2008.en.fr | 27.5 | 0.554 | | newstest2009.en.fr | 29.4 | 0.577 | | newstest2010.en.fr | 32.7 | 0.596 | | newstest2011.en.fr | 34.3 | 0.611 | | newstest2012.en.fr | 31.8 | 0.592 | | newstest2013.en.fr | 33.2 | 0.589 | | Tatoeba.en.fr | 50.5 | 0.672 |
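Generation arguments such as `max_length` can typically be passed straight through the pipeline call, which helps with longer passages. A minimal sketch assuming only the model ID above (the text is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

text = "Machine translation quality depends heavily on the training data."
print(translator(text, max_length=128)[0]["translation_text"])
```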
{"license": "apache-2.0", "pipeline_tag": "translation"}
Helsinki-NLP/opus-mt-en-fr
null
[ "transformers", "pytorch", "tf", "jax", "marian", "text2text-generation", "translation", "en", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-gle * source group: English * target group: Irish * OPUS readme: [eng-gle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md) * model: transformer-align * source language(s): eng * target language(s): gle * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.gle | 37.5 | 0.593 | ### System Info: - hf_name: eng-gle - source_languages: eng - target_languages: gle - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ga'] - src_constituents: {'eng'} - tgt_constituents: {'gle'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: gle - short_pair: en-ga - chrF2_score: 0.593 - bleu: 37.5 - brevity_penalty: 1.0 - ref_len: 12200.0 - src_name: English - tgt_name: Irish - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: ga - prefer_old: False - long_pair: eng-gle - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["en", "ga"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ga
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ga", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-gaa * source languages: en * target languages: gaa * OPUS readme: [en-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gaa/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.gaa | 39.9 | 0.593 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-gaa
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "gaa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-gem * source group: English * target group: Germanic languages * OPUS readme: [eng-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md) * model: transformer * source language(s): eng * target language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engdeu.eng.deu | 20.9 | 0.521 | | news-test2008-engdeu.eng.deu | 21.1 | 0.511 | | newstest2009-engdeu.eng.deu | 20.5 | 0.516 | | newstest2010-engdeu.eng.deu | 22.5 | 0.526 | | newstest2011-engdeu.eng.deu | 20.5 | 0.508 | | newstest2012-engdeu.eng.deu | 20.8 | 0.507 | | newstest2013-engdeu.eng.deu | 24.6 | 0.534 | | newstest2015-ende-engdeu.eng.deu | 27.9 | 0.569 | | newstest2016-ende-engdeu.eng.deu | 33.2 | 0.607 | | newstest2017-ende-engdeu.eng.deu | 26.5 | 0.560 | | newstest2018-ende-engdeu.eng.deu | 39.4 | 0.648 | | newstest2019-ende-engdeu.eng.deu | 35.0 | 0.613 | | Tatoeba-test.eng-afr.eng.afr | 56.5 | 0.745 | | Tatoeba-test.eng-ang.eng.ang | 6.7 | 0.154 | | Tatoeba-test.eng-dan.eng.dan | 58.0 | 0.726 | | Tatoeba-test.eng-deu.eng.deu | 40.3 | 0.615 | | Tatoeba-test.eng-enm.eng.enm | 1.4 | 0.215 | | Tatoeba-test.eng-fao.eng.fao | 7.2 | 0.304 | | Tatoeba-test.eng-frr.eng.frr | 5.5 | 0.159 | | Tatoeba-test.eng-fry.eng.fry | 19.4 | 0.433 | | Tatoeba-test.eng-gos.eng.gos | 1.0 | 0.182 | | Tatoeba-test.eng-got.eng.got | 0.3 | 0.012 | | Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.130 | | Tatoeba-test.eng-isl.eng.isl | 23.4 | 0.505 | | Tatoeba-test.eng-ksh.eng.ksh | 1.1 | 0.141 | | Tatoeba-test.eng-ltz.eng.ltz | 20.3 | 0.379 | | Tatoeba-test.eng.multi | 46.5 | 0.641 | | Tatoeba-test.eng-nds.eng.nds | 20.6 | 0.458 | | Tatoeba-test.eng-nld.eng.nld | 53.4 | 0.702 | | Tatoeba-test.eng-non.eng.non | 0.6 | 0.166 | | Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.679 | | Tatoeba-test.eng-pdc.eng.pdc | 3.9 | 0.189 | | Tatoeba-test.eng-sco.eng.sco | 33.0 | 0.542 | | Tatoeba-test.eng-stq.eng.stq | 2.3 | 0.274 | | Tatoeba-test.eng-swe.eng.swe | 57.9 | 0.719 | | Tatoeba-test.eng-swg.eng.swg | 1.2 | 0.171 | | Tatoeba-test.eng-yid.eng.yid | 7.2 | 0.304 | ### System Info: - hf_name: eng-gem - source_languages: eng - target_languages: gem - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem'] - src_constituents: {'eng'} - tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: 
https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: gem - short_pair: en-gem - chrF2_score: 0.6409999999999999 - bleu: 46.5 - brevity_penalty: 0.9790000000000001 - ref_len: 73328.0 - src_name: English - tgt_name: Germanic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: gem - prefer_old: False - long_pair: eng-gem - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
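Because this model covers many Germanic target languages, each input sentence must start with the `>>id<<` token of the intended target (see the target language list above). The following minimal sketch, not part of the original card, shows one way to do this with the Hugging Face `transformers` Marian classes; the example sentences and the chosen targets (`nld`, `swe`) are illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-gem"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>id<< prefix selects the target language (IDs are listed in the card above).
sentences = [
    ">>nld<< The weather is nice today.",
    ">>swe<< The weather is nice today.",
]

batch = tokenizer(sentences, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```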
{"language": ["en", "da", "sv", "af", "nn", "fy", "fo", "de", "nb", "nl", "is", "lb", "yi", "gem"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-gem
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "da", "sv", "af", "nn", "fy", "fo", "de", "nb", "nl", "is", "lb", "yi", "gem", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-gil
* source languages: en
* target languages: gil
* OPUS readme: [en-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.gil | 38.8 | 0.604 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-gil
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "gil", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-gl
* source languages: en
* target languages: gl
* OPUS readme: [en-gl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.gl | 36.4 | 0.572 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-gl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "gl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-gmq * source group: English * target group: North Germanic languages * OPUS readme: [eng-gmq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md) * model: transformer * source language(s): eng * target language(s): dan fao isl nno nob nob_Hebr non_Latn swe * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-dan.eng.dan | 57.7 | 0.724 | | Tatoeba-test.eng-fao.eng.fao | 9.2 | 0.322 | | Tatoeba-test.eng-isl.eng.isl | 23.8 | 0.506 | | Tatoeba-test.eng.multi | 52.8 | 0.688 | | Tatoeba-test.eng-non.eng.non | 0.7 | 0.196 | | Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.678 | | Tatoeba-test.eng-swe.eng.swe | 57.8 | 0.717 | ### System Info: - hf_name: eng-gmq - source_languages: eng - target_languages: gmq - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq'] - src_constituents: {'eng'} - tgt_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: gmq - short_pair: en-gmq - chrF2_score: 0.688 - bleu: 52.8 - brevity_penalty: 0.973 - ref_len: 71881.0 - src_name: English - tgt_name: North Germanic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: gmq - prefer_old: False - long_pair: eng-gmq - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
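As with the other multilingual OPUS-MT models, the target language is chosen by prefixing the input with `>>id<<`. A small sketch, not part of the original card, using the `transformers` translation pipeline; the sentence and the chosen targets are only examples.

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-gmq")

# Prefix the input with the target language token, e.g. Danish (dan) or Icelandic (isl).
print(translator(">>dan<< How are you today?")[0]["translation_text"])
print(translator(">>isl<< How are you today?")[0]["translation_text"])
```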
{"language": ["en", "da", "nb", "sv", "is", "nn", "fo", "gmq"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-gmq
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "da", "nb", "sv", "is", "nn", "fo", "gmq", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-gmw * source group: English * target group: West Germanic languages * OPUS readme: [eng-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md) * model: transformer * source language(s): eng * target language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engdeu.eng.deu | 21.4 | 0.518 | | news-test2008-engdeu.eng.deu | 21.0 | 0.510 | | newstest2009-engdeu.eng.deu | 20.4 | 0.513 | | newstest2010-engdeu.eng.deu | 22.9 | 0.528 | | newstest2011-engdeu.eng.deu | 20.5 | 0.508 | | newstest2012-engdeu.eng.deu | 21.0 | 0.507 | | newstest2013-engdeu.eng.deu | 24.7 | 0.533 | | newstest2015-ende-engdeu.eng.deu | 28.2 | 0.568 | | newstest2016-ende-engdeu.eng.deu | 33.3 | 0.605 | | newstest2017-ende-engdeu.eng.deu | 26.5 | 0.559 | | newstest2018-ende-engdeu.eng.deu | 39.9 | 0.649 | | newstest2019-ende-engdeu.eng.deu | 35.9 | 0.616 | | Tatoeba-test.eng-afr.eng.afr | 55.7 | 0.740 | | Tatoeba-test.eng-ang.eng.ang | 6.5 | 0.164 | | Tatoeba-test.eng-deu.eng.deu | 40.4 | 0.614 | | Tatoeba-test.eng-enm.eng.enm | 2.3 | 0.254 | | Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.248 | | Tatoeba-test.eng-fry.eng.fry | 17.9 | 0.424 | | Tatoeba-test.eng-gos.eng.gos | 2.2 | 0.309 | | Tatoeba-test.eng-gsw.eng.gsw | 1.6 | 0.186 | | Tatoeba-test.eng-ksh.eng.ksh | 1.5 | 0.189 | | Tatoeba-test.eng-ltz.eng.ltz | 20.2 | 0.383 | | Tatoeba-test.eng.multi | 41.6 | 0.609 | | Tatoeba-test.eng-nds.eng.nds | 18.9 | 0.437 | | Tatoeba-test.eng-nld.eng.nld | 53.1 | 0.699 | | Tatoeba-test.eng-pdc.eng.pdc | 7.7 | 0.262 | | Tatoeba-test.eng-sco.eng.sco | 37.7 | 0.557 | | Tatoeba-test.eng-stq.eng.stq | 5.9 | 0.380 | | Tatoeba-test.eng-swg.eng.swg | 6.2 | 0.236 | | Tatoeba-test.eng-yid.eng.yid | 6.8 | 0.296 | ### System Info: - hf_name: eng-gmw - source_languages: eng - target_languages: gmw - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw'] - src_constituents: {'eng'} - tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: gmw - short_pair: en-gmw - chrF2_score: 0.609 - bleu: 41.6 - brevity_penalty: 0.9890000000000001 - ref_len: 74922.0 - src_name: English - tgt_name: West Germanic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: gmw - prefer_old: False - long_pair: eng-gmw - 
helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
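The valid `>>id<<` prefixes are the target language IDs listed above. The sketch below, not part of the original card, first asks the tokenizer which language codes it knows about and then translates into Afrikaans; `supported_language_codes` is assumed to be available in your `transformers` version, and the target-language list in the card remains the authoritative reference.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-gmw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# List the >>id<< tokens present in the vocabulary (availability of this helper
# depends on the transformers version; the card's target list is authoritative).
print(tokenizer.supported_language_codes)

batch = tokenizer([">>afr<< Translation models are useful."], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```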
{"language": ["en", "nl", "lb", "af", "de", "fy", "yi", "gmw"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-gmw
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "nl", "lb", "af", "de", "fy", "yi", "gmw", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-grk * source group: English * target group: Greek languages * OPUS readme: [eng-grk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md) * model: transformer * source language(s): eng * target language(s): ell grc_Grek * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-ell.eng.ell | 53.8 | 0.723 | | Tatoeba-test.eng-grc.eng.grc | 0.1 | 0.102 | | Tatoeba-test.eng.multi | 45.6 | 0.677 | ### System Info: - hf_name: eng-grk - source_languages: eng - target_languages: grk - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'el', 'grk'] - src_constituents: {'eng'} - tgt_constituents: {'grc_Grek', 'ell'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: grk - short_pair: en-grk - chrF2_score: 0.677 - bleu: 45.6 - brevity_penalty: 1.0 - ref_len: 59951.0 - src_name: English - tgt_name: Greek languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: grk - prefer_old: False - long_pair: eng-grk - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
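Here `>>ell<<` selects Modern Greek and `>>grc_Grek<<` Ancient Greek; the benchmark table above suggests the Ancient Greek direction is of very limited quality. A minimal pipeline sketch, not part of the original card, with an illustrative sentence:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-grk")

# ">>ell<<" selects Modern Greek; ">>grc_Grek<<" would select Ancient Greek,
# which the benchmark table above shows to be much weaker.
print(translator(">>ell<< I would like a cup of coffee, please.")[0]["translation_text"])
```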
{"language": ["en", "el", "grk"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-grk
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "el", "grk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-guw
* source languages: en
* target languages: guw
* OPUS readme: [en-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.guw | 45.7 | 0.634 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-guw
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "guw", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-gv
* source languages: en
* target languages: gv
* OPUS readme: [en-gv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.en.gv | 70.1 | 0.885 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-gv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "gv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ha
* source languages: en
* target languages: ha
* OPUS readme: [en-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ha | 34.1 | 0.544 |
| Tatoeba.en.ha | 17.6 | 0.498 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ha
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ha", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-he
* source languages: en
* target languages: he
* OPUS readme: [en-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-he/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.he | 40.1 | 0.609 |
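Since this model has a single target language, no `>>id<<` prefix is needed. A short sketch, not part of the original card, translating a small batch of illustrative sentences with the `transformers` pipeline:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-he")

# The pipeline accepts a list of sentences and returns one result per input.
results = translator(["Good morning.", "Where is the train station?"])
for r in results:
    print(r["translation_text"])
```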
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-he
null
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "en", "he", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-hin * source group: English * target group: Hindi * OPUS readme: [eng-hin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hin/README.md) * model: transformer-align * source language(s): eng * target language(s): hin * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014.eng.hin | 6.9 | 0.296 | | newstest2014-hien.eng.hin | 9.9 | 0.323 | | Tatoeba-test.eng.hin | 16.1 | 0.447 | ### System Info: - hf_name: eng-hin - source_languages: eng - target_languages: hin - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hin/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'hi'] - src_constituents: {'eng'} - tgt_constituents: {'hin'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: hin - short_pair: en-hi - chrF2_score: 0.447 - bleu: 16.1 - brevity_penalty: 1.0 - ref_len: 32904.0 - src_name: English - tgt_name: Hindi - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: hi - prefer_old: False - long_pair: eng-hin - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
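Because the model has a single target language (Hindi), the input needs no `>>id<<` prefix. A minimal usage sketch with the `transformers` pipeline, not part of the original card; the example sentence is illustrative.

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")
print(translator("The library opens at nine in the morning.")[0]["translation_text"])
```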
{"language": ["en", "hi"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-hi
null
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "en", "hi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-hil
* source languages: en
* target languages: hil
* OPUS readme: [en-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.hil | 49.4 | 0.696 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-hil
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "hil", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ho
* source languages: en
* target languages: ho
* OPUS readme: [en-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ho | 33.9 | 0.563 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ho
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ho", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ht
* source languages: en
* target languages: ht
* OPUS readme: [en-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ht | 38.3 | 0.545 |
| Tatoeba.en.ht | 45.2 | 0.592 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ht
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ht", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-hu
* source languages: en
* target languages: hu
* OPUS readme: [en-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.hu | 40.1 | 0.628 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-hu
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-hye * source group: English * target group: Armenian * OPUS readme: [eng-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md) * model: transformer-align * source language(s): eng * target language(s): hye * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.hye | 16.6 | 0.404 | ### System Info: - hf_name: eng-hye - source_languages: eng - target_languages: hye - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'hy'] - src_constituents: {'eng'} - tgt_constituents: {'hye', 'hye_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt - src_alpha3: eng - tgt_alpha3: hye - short_pair: en-hy - chrF2_score: 0.40399999999999997 - bleu: 16.6 - brevity_penalty: 1.0 - ref_len: 5115.0 - src_name: English - tgt_name: Armenian - train_date: 2020-06-16 - src_alpha2: en - tgt_alpha2: hy - prefer_old: False - long_pair: eng-hye - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["en", "hy"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-hy
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "hy", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-id
* source languages: en
* target languages: id
* OPUS readme: [en-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-id/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.id | 38.3 | 0.636 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-id
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "id", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ig
* source languages: en
* target languages: ig
* OPUS readme: [en-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ig | 39.5 | 0.546 |
| Tatoeba.en.ig | 3.8 | 0.297 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ig
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ig", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-iir * source group: English * target group: Indo-Iranian languages * OPUS readme: [eng-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md) * model: transformer * source language(s): eng * target language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 6.7 | 0.326 | | newsdev2019-engu-engguj.eng.guj | 6.0 | 0.283 | | newstest2014-hien-enghin.eng.hin | 10.4 | 0.353 | | newstest2019-engu-engguj.eng.guj | 6.6 | 0.282 | | Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.249 | | Tatoeba-test.eng-awa.eng.awa | 0.4 | 0.122 | | Tatoeba-test.eng-ben.eng.ben | 15.3 | 0.459 | | Tatoeba-test.eng-bho.eng.bho | 3.7 | 0.161 | | Tatoeba-test.eng-fas.eng.fas | 3.4 | 0.227 | | Tatoeba-test.eng-guj.eng.guj | 18.5 | 0.365 | | Tatoeba-test.eng-hif.eng.hif | 1.0 | 0.064 | | Tatoeba-test.eng-hin.eng.hin | 17.0 | 0.461 | | Tatoeba-test.eng-jdt.eng.jdt | 3.9 | 0.122 | | Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.059 | | Tatoeba-test.eng-kur.eng.kur | 4.0 | 0.125 | | Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.008 | | Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.445 | | Tatoeba-test.eng-mar.eng.mar | 20.7 | 0.473 | | Tatoeba-test.eng.multi | 13.7 | 0.392 | | Tatoeba-test.eng-nep.eng.nep | 0.6 | 0.060 | | Tatoeba-test.eng-ori.eng.ori | 2.4 | 0.193 | | Tatoeba-test.eng-oss.eng.oss | 2.1 | 0.174 | | Tatoeba-test.eng-pan.eng.pan | 9.7 | 0.355 | | Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.126 | | Tatoeba-test.eng-rom.eng.rom | 1.3 | 0.230 | | Tatoeba-test.eng-san.eng.san | 1.3 | 0.101 | | Tatoeba-test.eng-sin.eng.sin | 11.7 | 0.384 | | Tatoeba-test.eng-snd.eng.snd | 2.8 | 0.180 | | Tatoeba-test.eng-tgk.eng.tgk | 8.1 | 0.353 | | Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.015 | | Tatoeba-test.eng-urd.eng.urd | 12.3 | 0.409 | | Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.025 | ### System Info: - hf_name: eng-iir - source_languages: eng - target_languages: iir - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir'] - src_constituents: {'eng'} - tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip - url_test_set: 
https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: iir - short_pair: en-iir - chrF2_score: 0.392 - bleu: 13.7 - brevity_penalty: 1.0 - ref_len: 63351.0 - src_name: English - tgt_name: Indo-Iranian languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: iir - prefer_old: False - long_pair: eng-iir - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
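A practical consequence of the `>>id<<` scheme is that one batch may mix target languages. The sketch below, not part of the original card, assumes the listed target IDs (for example `hin`, `ben`, `urd`) are used verbatim as prefixes; the sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-iir"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# One batch, three different target languages.
sentences = [
    ">>hin<< Water is essential for life.",
    ">>ben<< Water is essential for life.",
    ">>urd<< Water is essential for life.",
]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```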
{"language": ["en", "bn", "or", "gu", "mr", "ur", "hi", "ps", "os", "as", "si", "iir"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-iir
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bn", "or", "gu", "mr", "ur", "hi", "ps", "os", "as", "si", "iir", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ilo
* source languages: en
* target languages: ilo
* OPUS readme: [en-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ilo | 33.2 | 0.584 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ilo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ilo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-inc * source group: English * target group: Indic languages * OPUS readme: [eng-inc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-inc/README.md) * model: transformer * source language(s): eng * target language(s): asm awa ben bho gom guj hif_Latn hin mai mar npi ori pan_Guru pnb rom san_Deva sin snd_Arab urd * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 8.2 | 0.342 | | newsdev2019-engu-engguj.eng.guj | 6.5 | 0.293 | | newstest2014-hien-enghin.eng.hin | 11.4 | 0.364 | | newstest2019-engu-engguj.eng.guj | 7.2 | 0.296 | | Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.277 | | Tatoeba-test.eng-awa.eng.awa | 0.5 | 0.132 | | Tatoeba-test.eng-ben.eng.ben | 16.7 | 0.470 | | Tatoeba-test.eng-bho.eng.bho | 4.3 | 0.227 | | Tatoeba-test.eng-guj.eng.guj | 17.5 | 0.373 | | Tatoeba-test.eng-hif.eng.hif | 0.6 | 0.028 | | Tatoeba-test.eng-hin.eng.hin | 17.7 | 0.469 | | Tatoeba-test.eng-kok.eng.kok | 1.7 | 0.000 | | Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.028 | | Tatoeba-test.eng-mai.eng.mai | 15.6 | 0.429 | | Tatoeba-test.eng-mar.eng.mar | 21.3 | 0.477 | | Tatoeba-test.eng.multi | 17.3 | 0.448 | | Tatoeba-test.eng-nep.eng.nep | 0.8 | 0.081 | | Tatoeba-test.eng-ori.eng.ori | 2.2 | 0.208 | | Tatoeba-test.eng-pan.eng.pan | 8.0 | 0.347 | | Tatoeba-test.eng-rom.eng.rom | 0.4 | 0.197 | | Tatoeba-test.eng-san.eng.san | 0.5 | 0.108 | | Tatoeba-test.eng-sin.eng.sin | 9.1 | 0.364 | | Tatoeba-test.eng-snd.eng.snd | 4.4 | 0.284 | | Tatoeba-test.eng-urd.eng.urd | 13.3 | 0.423 | ### System Info: - hf_name: eng-inc - source_languages: eng - target_languages: inc - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-inc/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc'] - src_constituents: {'eng'} - tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: inc - short_pair: en-inc - chrF2_score: 0.44799999999999995 - bleu: 17.3 - brevity_penalty: 1.0 - ref_len: 59917.0 - src_name: English - tgt_name: Indic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: inc - prefer_old: False - long_pair: eng-inc - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
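Usage follows the same pattern as the other multilingual models above. The brief sketch below, not part of the original card, translates into Marathi and passes an explicit beam size to `generate`; both the sentence and the beam size are illustrative choices.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-inc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# ">>mar<<" selects Marathi; num_beams is an ordinary generate() option.
batch = tokenizer([">>mar<< The train leaves in ten minutes."], return_tensors="pt", padding=True)
generated = model.generate(**batch, num_beams=4)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```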
{"language": ["en", "bn", "or", "gu", "mr", "ur", "hi", "as", "si", "inc"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-inc
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bn", "or", "gu", "mr", "ur", "hi", "as", "si", "inc", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-ine * source group: English * target group: Indo-European languages * OPUS readme: [eng-ine](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md) * model: transformer * source language(s): eng * target language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 6.2 | 0.317 | | newsdev2016-enro-engron.eng.ron | 22.1 | 0.525 | | newsdev2017-enlv-englav.eng.lav | 17.4 | 0.486 | | newsdev2019-engu-engguj.eng.guj | 6.5 | 0.303 | | newsdev2019-enlt-englit.eng.lit | 14.9 | 0.476 | | newsdiscussdev2015-enfr-engfra.eng.fra | 26.4 | 0.547 | | newsdiscusstest2015-enfr-engfra.eng.fra | 30.0 | 0.575 | | newssyscomb2009-engces.eng.ces | 14.7 | 0.442 | | newssyscomb2009-engdeu.eng.deu | 16.7 | 0.487 | | newssyscomb2009-engfra.eng.fra | 24.8 | 0.547 | | newssyscomb2009-engita.eng.ita | 25.2 | 0.562 | | newssyscomb2009-engspa.eng.spa | 27.0 | 0.554 | | news-test2008-engces.eng.ces | 13.0 | 0.417 | | news-test2008-engdeu.eng.deu | 17.4 | 0.480 | | news-test2008-engfra.eng.fra | 22.3 | 0.519 | | news-test2008-engspa.eng.spa | 24.9 | 0.532 | | newstest2009-engces.eng.ces | 13.6 | 0.432 | | newstest2009-engdeu.eng.deu | 16.6 | 0.482 | | newstest2009-engfra.eng.fra | 23.5 | 0.535 | | newstest2009-engita.eng.ita | 25.5 | 0.561 | | newstest2009-engspa.eng.spa | 26.3 | 0.551 | | newstest2010-engces.eng.ces | 14.2 | 0.436 | | newstest2010-engdeu.eng.deu | 18.3 | 0.492 | | newstest2010-engfra.eng.fra | 25.7 | 0.550 | | newstest2010-engspa.eng.spa | 30.5 | 0.578 | | newstest2011-engces.eng.ces | 15.1 | 0.439 | | newstest2011-engdeu.eng.deu | 17.1 | 0.478 | | newstest2011-engfra.eng.fra | 28.0 | 0.569 | | newstest2011-engspa.eng.spa | 31.9 | 0.580 | | newstest2012-engces.eng.ces | 13.6 | 0.418 | | newstest2012-engdeu.eng.deu | 17.0 | 0.475 | | newstest2012-engfra.eng.fra | 26.1 | 0.553 | | newstest2012-engrus.eng.rus | 21.4 | 0.506 | | newstest2012-engspa.eng.spa | 31.4 | 0.577 | | newstest2013-engces.eng.ces | 15.3 | 0.438 | | newstest2013-engdeu.eng.deu | 20.3 | 0.501 | | newstest2013-engfra.eng.fra | 26.0 | 0.540 | | newstest2013-engrus.eng.rus | 16.1 | 0.449 | | newstest2013-engspa.eng.spa | 28.6 | 0.555 | | newstest2014-hien-enghin.eng.hin | 9.5 | 0.344 | | 
newstest2015-encs-engces.eng.ces | 14.8 | 0.440 | | newstest2015-ende-engdeu.eng.deu | 22.6 | 0.523 | | newstest2015-enru-engrus.eng.rus | 18.8 | 0.483 | | newstest2016-encs-engces.eng.ces | 16.8 | 0.457 | | newstest2016-ende-engdeu.eng.deu | 26.2 | 0.555 | | newstest2016-enro-engron.eng.ron | 21.2 | 0.510 | | newstest2016-enru-engrus.eng.rus | 17.6 | 0.471 | | newstest2017-encs-engces.eng.ces | 13.6 | 0.421 | | newstest2017-ende-engdeu.eng.deu | 21.5 | 0.516 | | newstest2017-enlv-englav.eng.lav | 13.0 | 0.452 | | newstest2017-enru-engrus.eng.rus | 18.7 | 0.486 | | newstest2018-encs-engces.eng.ces | 13.5 | 0.425 | | newstest2018-ende-engdeu.eng.deu | 29.8 | 0.581 | | newstest2018-enru-engrus.eng.rus | 16.1 | 0.472 | | newstest2019-encs-engces.eng.ces | 14.8 | 0.435 | | newstest2019-ende-engdeu.eng.deu | 26.6 | 0.554 | | newstest2019-engu-engguj.eng.guj | 6.9 | 0.313 | | newstest2019-enlt-englit.eng.lit | 10.6 | 0.429 | | newstest2019-enru-engrus.eng.rus | 17.5 | 0.452 | | Tatoeba-test.eng-afr.eng.afr | 52.1 | 0.708 | | Tatoeba-test.eng-ang.eng.ang | 5.1 | 0.131 | | Tatoeba-test.eng-arg.eng.arg | 1.2 | 0.099 | | Tatoeba-test.eng-asm.eng.asm | 2.9 | 0.259 | | Tatoeba-test.eng-ast.eng.ast | 14.1 | 0.408 | | Tatoeba-test.eng-awa.eng.awa | 0.3 | 0.002 | | Tatoeba-test.eng-bel.eng.bel | 18.1 | 0.450 | | Tatoeba-test.eng-ben.eng.ben | 13.5 | 0.432 | | Tatoeba-test.eng-bho.eng.bho | 0.3 | 0.003 | | Tatoeba-test.eng-bre.eng.bre | 10.4 | 0.318 | | Tatoeba-test.eng-bul.eng.bul | 38.7 | 0.592 | | Tatoeba-test.eng-cat.eng.cat | 42.0 | 0.633 | | Tatoeba-test.eng-ces.eng.ces | 32.3 | 0.546 | | Tatoeba-test.eng-cor.eng.cor | 0.5 | 0.079 | | Tatoeba-test.eng-cos.eng.cos | 3.1 | 0.148 | | Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.216 | | Tatoeba-test.eng-cym.eng.cym | 22.4 | 0.470 | | Tatoeba-test.eng-dan.eng.dan | 49.7 | 0.671 | | Tatoeba-test.eng-deu.eng.deu | 31.7 | 0.554 | | Tatoeba-test.eng-dsb.eng.dsb | 1.1 | 0.139 | | Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.089 | | Tatoeba-test.eng-ell.eng.ell | 42.7 | 0.640 | | Tatoeba-test.eng-enm.eng.enm | 3.5 | 0.259 | | Tatoeba-test.eng-ext.eng.ext | 6.4 | 0.235 | | Tatoeba-test.eng-fao.eng.fao | 6.6 | 0.285 | | Tatoeba-test.eng-fas.eng.fas | 5.7 | 0.257 | | Tatoeba-test.eng-fra.eng.fra | 38.4 | 0.595 | | Tatoeba-test.eng-frm.eng.frm | 0.9 | 0.149 | | Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.145 | | Tatoeba-test.eng-fry.eng.fry | 16.5 | 0.411 | | Tatoeba-test.eng-gcf.eng.gcf | 0.6 | 0.098 | | Tatoeba-test.eng-gla.eng.gla | 11.6 | 0.361 | | Tatoeba-test.eng-gle.eng.gle | 32.5 | 0.546 | | Tatoeba-test.eng-glg.eng.glg | 38.4 | 0.602 | | Tatoeba-test.eng-glv.eng.glv | 23.1 | 0.418 | | Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.137 | | Tatoeba-test.eng-got.eng.got | 0.2 | 0.010 | | Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 | | Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.108 | | Tatoeba-test.eng-guj.eng.guj | 20.8 | 0.391 | | Tatoeba-test.eng-hat.eng.hat | 34.0 | 0.537 | | Tatoeba-test.eng-hbs.eng.hbs | 33.7 | 0.567 | | Tatoeba-test.eng-hif.eng.hif | 2.8 | 0.269 | | Tatoeba-test.eng-hin.eng.hin | 15.6 | 0.437 | | Tatoeba-test.eng-hsb.eng.hsb | 5.4 | 0.320 | | Tatoeba-test.eng-hye.eng.hye | 17.4 | 0.426 | | Tatoeba-test.eng-isl.eng.isl | 17.4 | 0.436 | | Tatoeba-test.eng-ita.eng.ita | 40.4 | 0.636 | | Tatoeba-test.eng-jdt.eng.jdt | 6.4 | 0.008 | | Tatoeba-test.eng-kok.eng.kok | 6.6 | 0.005 | | Tatoeba-test.eng-ksh.eng.ksh | 0.8 | 0.123 | | Tatoeba-test.eng-kur.eng.kur | 10.2 | 0.209 | | Tatoeba-test.eng-lad.eng.lad | 0.8 | 0.163 | | Tatoeba-test.eng-lah.eng.lah | 0.2 | 0.001 | | 
Tatoeba-test.eng-lat.eng.lat | 9.4 | 0.372 | | Tatoeba-test.eng-lav.eng.lav | 30.3 | 0.559 | | Tatoeba-test.eng-lij.eng.lij | 1.0 | 0.130 | | Tatoeba-test.eng-lit.eng.lit | 25.3 | 0.560 | | Tatoeba-test.eng-lld.eng.lld | 0.4 | 0.139 | | Tatoeba-test.eng-lmo.eng.lmo | 0.6 | 0.108 | | Tatoeba-test.eng-ltz.eng.ltz | 18.1 | 0.388 | | Tatoeba-test.eng-mai.eng.mai | 17.2 | 0.464 | | Tatoeba-test.eng-mar.eng.mar | 18.0 | 0.451 | | Tatoeba-test.eng-mfe.eng.mfe | 81.0 | 0.899 | | Tatoeba-test.eng-mkd.eng.mkd | 37.6 | 0.587 | | Tatoeba-test.eng-msa.eng.msa | 27.7 | 0.519 | | Tatoeba-test.eng.multi | 32.6 | 0.539 | | Tatoeba-test.eng-mwl.eng.mwl | 3.8 | 0.134 | | Tatoeba-test.eng-nds.eng.nds | 14.3 | 0.401 | | Tatoeba-test.eng-nep.eng.nep | 0.5 | 0.002 | | Tatoeba-test.eng-nld.eng.nld | 44.0 | 0.642 | | Tatoeba-test.eng-non.eng.non | 0.7 | 0.118 | | Tatoeba-test.eng-nor.eng.nor | 42.7 | 0.623 | | Tatoeba-test.eng-oci.eng.oci | 7.2 | 0.295 | | Tatoeba-test.eng-ori.eng.ori | 2.7 | 0.257 | | Tatoeba-test.eng-orv.eng.orv | 0.2 | 0.008 | | Tatoeba-test.eng-oss.eng.oss | 2.9 | 0.264 | | Tatoeba-test.eng-pan.eng.pan | 7.4 | 0.337 | | Tatoeba-test.eng-pap.eng.pap | 48.5 | 0.656 | | Tatoeba-test.eng-pdc.eng.pdc | 1.8 | 0.145 | | Tatoeba-test.eng-pms.eng.pms | 0.7 | 0.136 | | Tatoeba-test.eng-pol.eng.pol | 31.1 | 0.563 | | Tatoeba-test.eng-por.eng.por | 37.0 | 0.605 | | Tatoeba-test.eng-prg.eng.prg | 0.2 | 0.100 | | Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.134 | | Tatoeba-test.eng-roh.eng.roh | 2.3 | 0.236 | | Tatoeba-test.eng-rom.eng.rom | 7.8 | 0.340 | | Tatoeba-test.eng-ron.eng.ron | 34.3 | 0.585 | | Tatoeba-test.eng-rue.eng.rue | 0.2 | 0.010 | | Tatoeba-test.eng-rus.eng.rus | 29.6 | 0.526 | | Tatoeba-test.eng-san.eng.san | 2.4 | 0.125 | | Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.079 | | Tatoeba-test.eng-sco.eng.sco | 33.6 | 0.562 | | Tatoeba-test.eng-sgs.eng.sgs | 3.4 | 0.114 | | Tatoeba-test.eng-sin.eng.sin | 9.2 | 0.349 | | Tatoeba-test.eng-slv.eng.slv | 15.6 | 0.334 | | Tatoeba-test.eng-snd.eng.snd | 9.1 | 0.324 | | Tatoeba-test.eng-spa.eng.spa | 43.4 | 0.645 | | Tatoeba-test.eng-sqi.eng.sqi | 39.0 | 0.621 | | Tatoeba-test.eng-stq.eng.stq | 10.8 | 0.373 | | Tatoeba-test.eng-swe.eng.swe | 49.9 | 0.663 | | Tatoeba-test.eng-swg.eng.swg | 0.7 | 0.137 | | Tatoeba-test.eng-tgk.eng.tgk | 6.4 | 0.346 | | Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.055 | | Tatoeba-test.eng-ukr.eng.ukr | 31.4 | 0.536 | | Tatoeba-test.eng-urd.eng.urd | 11.1 | 0.389 | | Tatoeba-test.eng-vec.eng.vec | 1.3 | 0.110 | | Tatoeba-test.eng-wln.eng.wln | 6.8 | 0.233 | | Tatoeba-test.eng-yid.eng.yid | 5.8 | 0.295 | | Tatoeba-test.eng-zza.eng.zza | 0.8 | 0.086 | ### System Info: - hf_name: eng-ine - source_languages: eng - target_languages: ine - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine'] - src_constituents: {'eng'} - tgt_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 
'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: ine - short_pair: en-ine - chrF2_score: 0.539 - bleu: 32.6 - brevity_penalty: 0.973 - ref_len: 68664.0 - src_name: English - tgt_name: Indo-European languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: ine - prefer_old: False - long_pair: eng-ine - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
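With this many target languages it can be convenient to loop over a handful of `>>id<<` codes for a single source sentence. A small sketch, not part of the original card, with arbitrarily chosen codes and an illustrative sentence:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ine"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = "The museum is closed on Mondays."
for code in ["spa", "pol", "hin", "swe"]:
    batch = tokenizer([f">>{code}<< {text}"], return_tensors="pt", padding=True)
    out = tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True)[0]
    print(code, "->", out)
```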
{"language": ["en", "ca", "es", "os", "ro", "fy", "cy", "sc", "is", "yi", "lb", "an", "sq", "fr", "ht", "rm", "ps", "af", "uk", "sl", "lt", "bg", "be", "gd", "si", "br", "mk", "or", "mr", "ru", "fo", "co", "oc", "pl", "gl", "nb", "bn", "id", "hy", "da", "gv", "nl", "pt", "hi", "as", "kw", "ga", "sv", "gu", "wa", "lv", "el", "it", "hr", "ur", "nn", "de", "cs", "ine"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ine
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ca", "es", "os", "ro", "fy", "cy", "sc", "is", "yi", "lb", "an", "sq", "fr", "ht", "rm", "ps", "af", "uk", "sl", "lt", "bg", "be", "gd", "si", "br", "mk", "or", "mr", "ru", "fo", "co", "oc", "pl", "gl", "nb", "bn", "id", "hy", "da", "gv", "nl", "pt", "hi", "as", "kw", "ga", "sv", "gu", "wa", "lv", "el", "it", "hr", "ur", "nn", "de", "cs", "ine", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-is
* source languages: en
* target languages: is
* OPUS readme: [en-is](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-is/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.is | 25.3 | 0.518 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-is
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "is", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-iso
* source languages: en
* target languages: iso
* OPUS readme: [en-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.iso | 35.7 | 0.523 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-iso
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "iso", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-it
* source languages: en
* target languages: it
* OPUS readme: [en-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-it/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.zip)
* test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.test.txt)
* test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.it | 30.9 | 0.606 |
| newstest2009.en.it | 31.9 | 0.604 |
| Tatoeba.en.it | 48.2 | 0.695 |
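A minimal sketch for this single-target model using the Marian classes directly (no language token is required); it is not part of the original card, and the sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-it"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["I left my keys on the kitchen table."], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```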
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-it
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### eng-itc

* source group: English
* target group: Italic languages
* OPUS readme: [eng-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md)
* model: transformer
* source language(s): eng
* target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); a usage sketch follows the system info below
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-engron.eng.ron | 27.1 | 0.565 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 29.9 | 0.574 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 35.3 | 0.609 |
| newssyscomb2009-engfra.eng.fra | 27.7 | 0.567 |
| newssyscomb2009-engita.eng.ita | 28.6 | 0.586 |
| newssyscomb2009-engspa.eng.spa | 29.8 | 0.569 |
| news-test2008-engfra.eng.fra | 25.0 | 0.536 |
| news-test2008-engspa.eng.spa | 27.1 | 0.548 |
| newstest2009-engfra.eng.fra | 26.7 | 0.557 |
| newstest2009-engita.eng.ita | 28.9 | 0.583 |
| newstest2009-engspa.eng.spa | 28.9 | 0.567 |
| newstest2010-engfra.eng.fra | 29.6 | 0.574 |
| newstest2010-engspa.eng.spa | 33.8 | 0.598 |
| newstest2011-engfra.eng.fra | 30.9 | 0.590 |
| newstest2011-engspa.eng.spa | 34.8 | 0.598 |
| newstest2012-engfra.eng.fra | 29.1 | 0.574 |
| newstest2012-engspa.eng.spa | 34.9 | 0.600 |
| newstest2013-engfra.eng.fra | 30.1 | 0.567 |
| newstest2013-engspa.eng.spa | 31.8 | 0.576 |
| newstest2016-enro-engron.eng.ron | 25.9 | 0.548 |
| Tatoeba-test.eng-arg.eng.arg | 1.6 | 0.120 |
| Tatoeba-test.eng-ast.eng.ast | 17.2 | 0.389 |
| Tatoeba-test.eng-cat.eng.cat | 47.6 | 0.668 |
| Tatoeba-test.eng-cos.eng.cos | 4.3 | 0.287 |
| Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.101 |
| Tatoeba-test.eng-ext.eng.ext | 8.7 | 0.287 |
| Tatoeba-test.eng-fra.eng.fra | 44.9 | 0.635 |
| Tatoeba-test.eng-frm.eng.frm | 1.0 | 0.225 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.7 | 0.115 |
| Tatoeba-test.eng-glg.eng.glg | 44.9 | 0.648 |
| Tatoeba-test.eng-hat.eng.hat | 30.9 | 0.533 |
| Tatoeba-test.eng-ita.eng.ita | 45.4 | 0.673 |
| Tatoeba-test.eng-lad.eng.lad | 5.6 | 0.279 |
| Tatoeba-test.eng-lat.eng.lat | 12.1 | 0.380 |
| Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.183 |
| Tatoeba-test.eng-lld.eng.lld | 0.5 | 0.199 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.187 |
| Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.909 |
| Tatoeba-test.eng-msa.eng.msa | 31.3 | 0.549 |
| Tatoeba-test.eng.multi | 38.0 | 0.588 |
| Tatoeba-test.eng-mwl.eng.mwl | 2.7 | 0.322 |
| Tatoeba-test.eng-oci.eng.oci | 8.2 | 0.293 |
| Tatoeba-test.eng-pap.eng.pap | 46.7 | 0.663 |
| Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.194 |
| Tatoeba-test.eng-por.eng.por | 41.2 | 0.635 |
| Tatoeba-test.eng-roh.eng.roh | 2.6 | 0.237 |
| Tatoeba-test.eng-ron.eng.ron | 40.6 | 0.632 |
| Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.181 |
| Tatoeba-test.eng-spa.eng.spa | 49.5 | 0.685 |
| Tatoeba-test.eng-vec.eng.vec | 1.6 | 0.223 |
| Tatoeba-test.eng-wln.eng.wln | 7.1 | 0.250 |

### System Info:
- hf_name: eng-itc
- source_languages: eng
- target_languages: itc
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']
- src_constituents: {'eng'}
- tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: itc
- short_pair: en-itc
- chrF2_score: 0.588
- bleu: 38.0
- brevity_penalty: 0.9670000000000001
- ref_len: 73951.0
- src_name: English
- tgt_name: Italic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: itc
- prefer_old: False
- long_pair: eng-itc
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
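Because this checkpoint is multilingual, the sentence-initial `>>id<<` token noted above is the detail most likely to be missed. The sketch below shows one way to supply it with the `transformers` Marian classes; the chosen target IDs (`>>fra<<`, `>>spa<<`) and the example sentence are illustrative only, not taken from the card.

```python
# Minimal sketch: translating English into two Italic target languages with the
# multilingual checkpoint. The ">>id<<" prefix selects the target language and
# must be the first token of every source sentence.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-itc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = [
    ">>fra<< The weather is nice today.",  # request French output
    ">>spa<< The weather is nice today.",  # request Spanish output
]

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```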
{"language": ["en", "it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-itc
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-jap

* source languages: en
* target languages: jap
* OPUS readme: [en-jap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-jap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.en.jap | 42.1 | 0.960 |
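Unlike the multilingual eng-itc model above, this is a single-pair checkpoint, so no `>>id<<` token is needed. A minimal usage sketch via the `translation` pipeline, with an arbitrary example sentence:

```python
# Sketch of basic usage for a bilingual OPUS-MT checkpoint; the input sentence
# is illustrative only.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-jap")
print(translator("In the beginning God created the heaven and the earth."))
```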
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-jap
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "jap", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-kg

* source languages: en
* target languages: kg
* OPUS readme: [en-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kg | 39.6 | 0.613 |
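The "download original weights" link on these cards points to the raw OPUS-MT (Marian-NMT) release rather than the converted transformers checkpoint. A rough sketch of fetching and unpacking it, using the URL from this card; the local file and directory names are arbitrary choices:

```python
# Sketch: download and extract the original Marian-NMT weights for en-kg.
# The archive typically holds the Marian model, vocabulary and SentencePiece
# files (exact contents are not listed on this card).
import urllib.request
import zipfile

url = "https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.zip"
archive_path, _ = urllib.request.urlretrieve(url, "opus-mt-en-kg.zip")

with zipfile.ZipFile(archive_path) as zf:
    zf.extractall("opus-mt-en-kg")
```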
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-kg
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "kg", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-kj

* source languages: en
* target languages: kj
* OPUS readme: [en-kj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kj | 29.6 | 0.539 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-kj
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "kj", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-kqn

* source languages: en
* target languages: kqn
* OPUS readme: [en-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kqn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kqn | 33.1 | 0.567 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-kqn
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "kqn", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-kwn

* source languages: en
* target languages: kwn
* OPUS readme: [en-kwn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kwn | 27.6 | 0.513 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-kwn
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "kwn", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-kwy

* source languages: en
* target languages: kwy
* OPUS readme: [en-kwy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwy/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kwy | 33.6 | 0.543 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-kwy
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "kwy", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-lg

* source languages: en
* target languages: lg
* OPUS readme: [en-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lg | 30.4 | 0.543 |
| Tatoeba.en.lg | 5.7 | 0.386 |
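Scores like the BLEU and chr-F columns above are corpus-level metrics that can be recomputed with `sacrebleu` once system outputs and references are extracted from the linked test-set file. The sketch below assumes `hypotheses` and `references` are already parallel lists of sentences; the placeholder strings are illustrative, and the layout of the `.test.txt` file is not specified on this card, so the loading step is omitted.

```python
# Rough sketch: recomputing corpus-level BLEU and chr-F with sacrebleu.
# The two lists are placeholders; in practice they would come from the
# system-output and reference sides of the linked test set.
import sacrebleu

hypotheses = ["this is a system translation"]      # illustrative only
references = ["this is a reference translation"]   # illustrative only

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])

# Note: depending on the sacrebleu version, chr-F is reported on a 0-100 or
# 0-1 scale; the tables in these cards use the 0-1 convention.
print(f"BLEU = {bleu.score:.1f}, chr-F = {chrf.score:.3f}")
```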
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-lg
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "lg", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-ln

* source languages: en
* target languages: ln
* OPUS readme: [en-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ln | 36.7 | 0.588 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-ln
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ln", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-en-loz

* source languages: en
* target languages: loz
* OPUS readme: [en-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.loz | 40.1 | 0.596 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-en-loz
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "loz", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00