| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| Helsinki-NLP/opus-mt-sla-sla | Helsinki-NLP | 2023-08-16T12:04:14Z | 141 | 1 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "be", "hr", "mk", "cs", "ru", "pl", "bg", "uk", "sl", "sla", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- be
- hr
- mk
- cs
- ru
- pl
- bg
- uk
- sl
- sla
tags:
- translation
license: apache-2.0
---
### sla-sla
* source group: Slavic languages
* target group: Slavic languages
* OPUS readme: [sla-sla](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-sla/README.md)
* model: transformer
* source language(s): bel bel_Latn bos_Latn bul bul_Latn ces dsb hrv hsb mkd orv_Cyrl pol rus slv srp_Cyrl srp_Latn ukr
* target language(s): bel bel_Latn bos_Latn bul bul_Latn ces dsb hrv hsb mkd orv_Cyrl pol rus slv srp_Cyrl srp_Latn ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.eval.txt)
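Since this is a multilingual checkpoint, the `>>id<<` token above must be prepended to every source sentence before tokenization. A minimal sketch of that preparation step (the helper `tag_for_target` is hypothetical, not part of the model; with the `transformers` library the tagged strings would then be fed to `MarianTokenizer`/`MarianMTModel`):

```python
# Hypothetical helper: prepend the required sentence-initial >>id<< token
# so the multilingual model knows which target language to produce.

def tag_for_target(sentences, target_id):
    """Prefix each sentence with the >>id<< target-language token."""
    token = f">>{target_id}<<"
    return [f"{token} {s}" for s in sentences]

# Example: request Ukrainian output from the sla-sla model.
tagged = tag_for_target(["Dobry den."], "ukr")
print(tagged[0])  # >>ukr<< Dobry den.
```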
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-cesrus.ces.rus | 15.9 | 0.437 |
| newstest2012-rusces.rus.ces | 13.6 | 0.403 |
| newstest2013-cesrus.ces.rus | 19.8 | 0.473 |
| newstest2013-rusces.rus.ces | 17.9 | 0.449 |
| Tatoeba-test.bel-bul.bel.bul | 100.0 | 1.000 |
| Tatoeba-test.bel-ces.bel.ces | 33.5 | 0.630 |
| Tatoeba-test.bel-hbs.bel.hbs | 45.4 | 0.644 |
| Tatoeba-test.bel-mkd.bel.mkd | 19.3 | 0.531 |
| Tatoeba-test.bel-pol.bel.pol | 46.9 | 0.681 |
| Tatoeba-test.bel-rus.bel.rus | 58.5 | 0.767 |
| Tatoeba-test.bel-ukr.bel.ukr | 55.1 | 0.743 |
| Tatoeba-test.bul-bel.bul.bel | 10.7 | 0.423 |
| Tatoeba-test.bul-ces.bul.ces | 36.9 | 0.585 |
| Tatoeba-test.bul-hbs.bul.hbs | 53.7 | 0.807 |
| Tatoeba-test.bul-mkd.bul.mkd | 31.9 | 0.715 |
| Tatoeba-test.bul-pol.bul.pol | 38.6 | 0.607 |
| Tatoeba-test.bul-rus.bul.rus | 44.8 | 0.655 |
| Tatoeba-test.bul-ukr.bul.ukr | 49.9 | 0.691 |
| Tatoeba-test.ces-bel.ces.bel | 30.9 | 0.585 |
| Tatoeba-test.ces-bul.ces.bul | 75.8 | 0.859 |
| Tatoeba-test.ces-hbs.ces.hbs | 50.0 | 0.661 |
| Tatoeba-test.ces-hsb.ces.hsb | 7.9 | 0.246 |
| Tatoeba-test.ces-mkd.ces.mkd | 24.6 | 0.569 |
| Tatoeba-test.ces-pol.ces.pol | 44.3 | 0.652 |
| Tatoeba-test.ces-rus.ces.rus | 50.8 | 0.690 |
| Tatoeba-test.ces-slv.ces.slv | 4.9 | 0.240 |
| Tatoeba-test.ces-ukr.ces.ukr | 52.9 | 0.687 |
| Tatoeba-test.dsb-pol.dsb.pol | 16.3 | 0.367 |
| Tatoeba-test.dsb-rus.dsb.rus | 12.7 | 0.245 |
| Tatoeba-test.hbs-bel.hbs.bel | 32.9 | 0.531 |
| Tatoeba-test.hbs-bul.hbs.bul | 100.0 | 1.000 |
| Tatoeba-test.hbs-ces.hbs.ces | 40.3 | 0.626 |
| Tatoeba-test.hbs-mkd.hbs.mkd | 19.3 | 0.535 |
| Tatoeba-test.hbs-pol.hbs.pol | 45.0 | 0.650 |
| Tatoeba-test.hbs-rus.hbs.rus | 53.5 | 0.709 |
| Tatoeba-test.hbs-ukr.hbs.ukr | 50.7 | 0.684 |
| Tatoeba-test.hsb-ces.hsb.ces | 17.9 | 0.366 |
| Tatoeba-test.mkd-bel.mkd.bel | 23.6 | 0.548 |
| Tatoeba-test.mkd-bul.mkd.bul | 54.2 | 0.833 |
| Tatoeba-test.mkd-ces.mkd.ces | 12.1 | 0.371 |
| Tatoeba-test.mkd-hbs.mkd.hbs | 19.3 | 0.577 |
| Tatoeba-test.mkd-pol.mkd.pol | 53.7 | 0.833 |
| Tatoeba-test.mkd-rus.mkd.rus | 34.2 | 0.745 |
| Tatoeba-test.mkd-ukr.mkd.ukr | 42.7 | 0.708 |
| Tatoeba-test.multi.multi | 48.5 | 0.672 |
| Tatoeba-test.orv-pol.orv.pol | 10.1 | 0.355 |
| Tatoeba-test.orv-rus.orv.rus | 10.6 | 0.275 |
| Tatoeba-test.orv-ukr.orv.ukr | 7.5 | 0.230 |
| Tatoeba-test.pol-bel.pol.bel | 29.8 | 0.533 |
| Tatoeba-test.pol-bul.pol.bul | 36.8 | 0.578 |
| Tatoeba-test.pol-ces.pol.ces | 43.6 | 0.626 |
| Tatoeba-test.pol-dsb.pol.dsb | 0.9 | 0.097 |
| Tatoeba-test.pol-hbs.pol.hbs | 42.4 | 0.644 |
| Tatoeba-test.pol-mkd.pol.mkd | 19.3 | 0.535 |
| Tatoeba-test.pol-orv.pol.orv | 0.7 | 0.109 |
| Tatoeba-test.pol-rus.pol.rus | 49.6 | 0.680 |
| Tatoeba-test.pol-slv.pol.slv | 7.3 | 0.262 |
| Tatoeba-test.pol-ukr.pol.ukr | 46.8 | 0.664 |
| Tatoeba-test.rus-bel.rus.bel | 34.4 | 0.577 |
| Tatoeba-test.rus-bul.rus.bul | 45.5 | 0.657 |
| Tatoeba-test.rus-ces.rus.ces | 48.0 | 0.659 |
| Tatoeba-test.rus-dsb.rus.dsb | 10.7 | 0.029 |
| Tatoeba-test.rus-hbs.rus.hbs | 44.6 | 0.655 |
| Tatoeba-test.rus-mkd.rus.mkd | 34.9 | 0.617 |
| Tatoeba-test.rus-orv.rus.orv | 0.1 | 0.073 |
| Tatoeba-test.rus-pol.rus.pol | 45.2 | 0.659 |
| Tatoeba-test.rus-slv.rus.slv | 30.4 | 0.476 |
| Tatoeba-test.rus-ukr.rus.ukr | 57.6 | 0.751 |
| Tatoeba-test.slv-ces.slv.ces | 42.5 | 0.604 |
| Tatoeba-test.slv-pol.slv.pol | 39.6 | 0.601 |
| Tatoeba-test.slv-rus.slv.rus | 47.2 | 0.638 |
| Tatoeba-test.slv-ukr.slv.ukr | 36.4 | 0.549 |
| Tatoeba-test.ukr-bel.ukr.bel | 36.9 | 0.597 |
| Tatoeba-test.ukr-bul.ukr.bul | 56.4 | 0.733 |
| Tatoeba-test.ukr-ces.ukr.ces | 52.1 | 0.686 |
| Tatoeba-test.ukr-hbs.ukr.hbs | 47.1 | 0.670 |
| Tatoeba-test.ukr-mkd.ukr.mkd | 20.8 | 0.548 |
| Tatoeba-test.ukr-orv.ukr.orv | 0.2 | 0.058 |
| Tatoeba-test.ukr-pol.ukr.pol | 50.1 | 0.695 |
| Tatoeba-test.ukr-rus.ukr.rus | 63.9 | 0.790 |
| Tatoeba-test.ukr-slv.ukr.slv | 14.5 | 0.288 |
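The chr-F column in the table above is chrF2: a character n-gram F-score that weights recall twice as heavily as precision (beta = 2). As a simplified sketch of just the F-beta combination step (the character n-gram counting that produces the precision and recall inputs is omitted here):

```python
# Simplified sketch: only the F-beta combination used by chrF2 (beta = 2),
# assuming precision and recall over character n-grams are precomputed.

def f_beta(precision, recall, beta=2.0):
    """Blend precision and recall, weighting recall beta^2 times more."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.7, 0.6))  # lands closer to recall than to precision
```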
### System Info:
- hf_name: sla-sla
- source_languages: sla
- target_languages: sla
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-sla/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']
- src_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- tgt_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.test.txt
- src_alpha3: sla
- tgt_alpha3: sla
- short_pair: sla-sla
- chrF2_score: 0.672
- bleu: 48.5
- brevity_penalty: 1.0
- ref_len: 59320.0
- src_name: Slavic languages
- tgt_name: Slavic languages
- train_date: 2020-07-27
- src_alpha2: sla
- tgt_alpha2: sla
- prefer_old: False
- long_pair: sla-sla
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
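The `bleu`, `brevity_penalty`, and `ref_len` fields above are related through BLEU's length correction: the score is multiplied by a penalty when the system output is shorter than the reference. A minimal sketch of the standard formula (the hypothesis length is not reported in the card, so the second call uses a hypothetical value):

```python
import math

# Standard BLEU brevity penalty: BP = exp(1 - ref_len / hyp_len) when the
# hypothesis is shorter than the reference, and 1.0 otherwise.

def brevity_penalty(ref_len, hyp_len):
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

print(brevity_penalty(59320, 59320))  # 1.0, as reported for the multi test set
print(brevity_penalty(59320, 50000))  # < 1.0 for a (hypothetical) shorter output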
|
| Helsinki-NLP/opus-mt-sl-uk | Helsinki-NLP | 2023-08-16T12:04:12Z | 107 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sl", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- sl
- uk
tags:
- translation
license: apache-2.0
---
### slv-ukr
* source group: Slovenian
* target group: Ukrainian
* OPUS readme: [slv-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-ukr/README.md)
* model: transformer-align
* source language(s): slv
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.slv.ukr | 10.6 | 0.236 |
### System Info:
- hf_name: slv-ukr
- source_languages: slv
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sl', 'uk']
- src_constituents: {'slv'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.test.txt
- src_alpha3: slv
- tgt_alpha3: ukr
- short_pair: sl-uk
- chrF2_score: 0.236
- bleu: 10.6
- brevity_penalty: 1.0
- ref_len: 3906.0
- src_name: Slovenian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: sl
- tgt_alpha2: uk
- prefer_old: False
- long_pair: slv-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-sl-sv | Helsinki-NLP | 2023-08-16T12:04:11Z | 111 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sl", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-sl-sv
* source languages: sl
* target languages: sv
* OPUS readme: [sl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sl-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sl.sv | 27.8 | 0.509 |
|
| Helsinki-NLP/opus-mt-sl-ru | Helsinki-NLP | 2023-08-16T12:04:10Z | 129 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sl", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- sl
- ru
tags:
- translation
license: apache-2.0
---
### slv-rus
* source group: Slovenian
* target group: Russian
* OPUS readme: [slv-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-rus/README.md)
* model: transformer-align
* source language(s): slv
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.slv.rus | 37.3 | 0.504 |
### System Info:
- hf_name: slv-rus
- source_languages: slv
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sl', 'ru']
- src_constituents: {'slv'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.test.txt
- src_alpha3: slv
- tgt_alpha3: rus
- short_pair: sl-ru
- chrF2_score: 0.504
- bleu: 37.3
- brevity_penalty: 0.988
- ref_len: 2101.0
- src_name: Slovenian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: sl
- tgt_alpha2: ru
- prefer_old: False
- long_pair: slv-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-sk-sv | Helsinki-NLP | 2023-08-16T12:04:05Z | 110 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sk", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-sk-sv
* source languages: sk
* target languages: sv
* OPUS readme: [sk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.sv | 33.1 | 0.544 |
|
| Helsinki-NLP/opus-mt-sk-fr | Helsinki-NLP | 2023-08-16T12:04:04Z | 494 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sk", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-sk-fr
* source languages: sk
* target languages: fr
* OPUS readme: [sk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.fr | 29.4 | 0.508 |
|
| Helsinki-NLP/opus-mt-sk-fi | Helsinki-NLP | 2023-08-16T12:04:03Z | 111 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sk", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-sk-fi
* source languages: sk
* target languages: fi
* OPUS readme: [sk-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.fi | 27.6 | 0.544 |
|
| Helsinki-NLP/opus-mt-sg-sv | Helsinki-NLP | 2023-08-16T12:03:57Z | 108 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sg", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-sg-sv
* source languages: sg
* target languages: sv
* OPUS readme: [sg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-sv/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-sv/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-sv/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.sv | 25.3 | 0.428 |
|
| Helsinki-NLP/opus-mt-sg-fr | Helsinki-NLP | 2023-08-16T12:03:55Z | 103 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sg", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-sg-fr
* source languages: sg
* target languages: fr
* OPUS readme: [sg-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-fr/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fr/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fr/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.fr | 24.9 | 0.420 |
|
| Helsinki-NLP/opus-mt-sem-en | Helsinki-NLP | 2023-08-16T12:03:50Z | 114 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mt", "ar", "he", "ti", "am", "sem", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- mt
- ar
- he
- ti
- am
- sem
- en
tags:
- translation
license: apache-2.0
---
### sem-eng
* source group: Semitic languages
* target group: English
* OPUS readme: [sem-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-eng/README.md)
* model: transformer
* source language(s): acm afb amh apc ara arq ary arz heb mlt tir
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.amh-eng.amh.eng | 37.5 | 0.565 |
| Tatoeba-test.ara-eng.ara.eng | 38.9 | 0.566 |
| Tatoeba-test.heb-eng.heb.eng | 44.6 | 0.610 |
| Tatoeba-test.mlt-eng.mlt.eng | 53.7 | 0.688 |
| Tatoeba-test.multi.eng | 41.7 | 0.588 |
| Tatoeba-test.tir-eng.tir.eng | 18.3 | 0.370 |
### System Info:
- hf_name: sem-eng
- source_languages: sem
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem', 'en']
- src_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.test.txt
- src_alpha3: sem
- tgt_alpha3: eng
- short_pair: sem-en
- chrF2_score: 0.588
- bleu: 41.7
- brevity_penalty: 0.987
- ref_len: 72950.0
- src_name: Semitic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: sem
- tgt_alpha2: en
- prefer_old: False
- long_pair: sem-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-sal-en | Helsinki-NLP | 2023-08-16T12:03:48Z | 118 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sal", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- sal
- en
tags:
- translation
license: apache-2.0
---
### sal-eng
* source group: Salishan languages
* target group: English
* OPUS readme: [sal-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sal-eng/README.md)
* model: transformer
* source language(s): shs_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-14.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.zip)
* test set translations: [opus-2020-07-14.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.test.txt)
* test set scores: [opus-2020-07-14.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.multi.eng | 38.7 | 0.572 |
| Tatoeba-test.shs.eng | 2.2 | 0.097 |
| Tatoeba-test.shs-eng.shs.eng | 2.2 | 0.097 |
### System Info:
- hf_name: sal-eng
- source_languages: sal
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sal-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sal', 'en']
- src_constituents: {'shs_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.test.txt
- src_alpha3: sal
- tgt_alpha3: eng
- short_pair: sal-en
- chrF2_score: 0.097
- bleu: 2.2
- brevity_penalty: 0.819
- ref_len: 222.0
- src_name: Salishan languages
- tgt_name: English
- train_date: 2020-07-14
- src_alpha2: sal
- tgt_alpha2: en
- prefer_old: False
- long_pair: sal-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-rw-sv | Helsinki-NLP | 2023-08-16T12:03:47Z | 114 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "rw", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-rw-sv
* source languages: rw
* target languages: sv
* OPUS readme: [rw-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rw-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rw-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rw.sv | 29.1 | 0.476 |
|
| Helsinki-NLP/opus-mt-run-sv | Helsinki-NLP | 2023-08-16T12:03:42Z | 104 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "run", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-run-sv
* source languages: run
* target languages: sv
* OPUS readme: [run-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/run-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/run-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.run.sv | 30.1 | 0.484 |
|
| Helsinki-NLP/opus-mt-ru-sv | Helsinki-NLP | 2023-08-16T12:03:36Z | 118 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ru", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ru
- sv
tags:
- translation
license: apache-2.0
---
### rus-swe
* source group: Russian
* target group: Swedish
* OPUS readme: [rus-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-swe/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): swe
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.swe | 51.9 | 0.677 |
### System Info:
- hf_name: rus-swe
- source_languages: rus
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'sv']
- src_constituents: {'rus'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: swe
- short_pair: ru-sv
- chrF2_score: 0.677
- bleu: 51.9
- brevity_penalty: 0.968
- ref_len: 8449.0
- src_name: Russian
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: sv
- prefer_old: False
- long_pair: rus-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-ru-no | Helsinki-NLP | 2023-08-16T12:03:34Z | 182 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ru", "no", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ru
- no
tags:
- translation
license: apache-2.0
---
### rus-nor
* source group: Russian
* target group: Norwegian
* OPUS readme: [rus-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.eval.txt)
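This checkpoint covers both written standards of Norwegian, so the `>>id<<` token chooses between them. A hypothetical illustration of the two valid prefixes (the Russian source sentence is an assumption for the example):

```python
# The rus-nor model serves two target standards: nob (Bokmaal) and nno (Nynorsk).
# The sentence-initial token selects which one the model should generate.
source = "Привет, мир."
bokmaal_input = ">>nob<< " + source
nynorsk_input = ">>nno<< " + source
print(bokmaal_input)  # >>nob<< Привет, мир.
```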
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.nor | 20.3 | 0.418 |
### System Info:
- hf_name: rus-nor
- source_languages: rus
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'no']
- src_constituents: {'rus'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: nor
- short_pair: ru-no
- chrF2_score: 0.418
- bleu: 20.3
- brevity_penalty: 0.946
- ref_len: 11686.0
- src_name: Russian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: no
- prefer_old: False
- long_pair: rus-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-ru-lv | Helsinki-NLP | 2023-08-16T12:03:33Z | 152 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ru", "lv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ru
- lv
tags:
- translation
license: apache-2.0
---
### rus-lav
* source group: Russian
* target group: Latvian
* OPUS readme: [rus-lav](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lav/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): lav
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.lav | 50.0 | 0.696 |
### System Info:
- hf_name: rus-lav
- source_languages: rus
- target_languages: lav
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lav/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'lv']
- src_constituents: {'rus'}
- tgt_constituents: {'lav'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: lav
- short_pair: ru-lv
- chrF2_score: 0.696
- bleu: 50.0
- brevity_penalty: 0.968
- ref_len: 1518.0
- src_name: Russian
- tgt_name: Latvian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: lv
- prefer_old: False
- long_pair: rus-lav
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
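The `brevity_penalty` and `ref_len` fields above come from BLEU's length correction. A minimal sketch of the standard formula follows; the hypothesis length used in the example is an assumption (it is not reported on the card), chosen so the result lands near the reported 0.968:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis is at least as long as
    the reference, otherwise exp(1 - ref_len / hyp_len)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# Against the card's ref_len of 1518, a hypothesis of ~1470 tokens gives a
# penalty close to the reported 0.968.
print(round(brevity_penalty(1470, 1518), 3))
```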
|
Helsinki-NLP/opus-mt-ru-lt
|
Helsinki-NLP
| 2023-08-16T12:03:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"lt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- ru
- lt
tags:
- translation
license: apache-2.0
---
### rus-lit
* source group: Russian
* target group: Lithuanian
* OPUS readme: [rus-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.lit | 43.5 | 0.675 |
### System Info:
- hf_name: rus-lit
- source_languages: rus
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'lt']
- src_constituents: {'rus'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: lit
- short_pair: ru-lt
- chrF2_score: 0.675
- bleu: 43.5
- brevity_penalty: 0.937
- ref_len: 14406.0
- src_name: Russian
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: lt
- prefer_old: False
- long_pair: rus-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-he
|
Helsinki-NLP
| 2023-08-16T12:03:29Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- ru
- he
tags:
- translation
license: apache-2.0
---
### ru-he
* source group: Russian
* target group: Hebrew
* OPUS readme: [rus-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-heb/README.md)
* model: transformer
* source language(s): rus
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.heb | 36.1 | 0.569 |
### System Info:
- hf_name: ru-he
- source_languages: rus
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'he']
- src_constituents: {'rus'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- long_pair: rus-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.test.txt
- src_alpha3: rus
- tgt_alpha3: heb
- chrF2_score: 0.569
- bleu: 36.1
- brevity_penalty: 0.999
- ref_len: 15028.0
- src_name: Russian
- tgt_name: Hebrew
- train_date: 2020-10-04
- src_alpha2: ru
- tgt_alpha2: he
- prefer_old: False
- short_pair: ru-he
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: b0a907615aca0d728a9bc90f16caef0848f6a435
- port_machine: LM0-400-22516.local
- port_time: 2020-10-26-16:16
|
Helsinki-NLP/opus-mt-ru-eo
|
Helsinki-NLP
| 2023-08-16T12:03:23Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- ru
- eo
tags:
- translation
license: apache-2.0
---
### rus-epo
* source group: Russian
* target group: Esperanto
* OPUS readme: [rus-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-epo/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.epo | 24.2 | 0.436 |
### System Info:
- hf_name: rus-epo
- source_languages: rus
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'eo']
- src_constituents: {'rus'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: epo
- short_pair: ru-eo
- chrF2_score: 0.436
- bleu: 24.2
- brevity_penalty: 0.925
- ref_len: 77197.0
- src_name: Russian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: eo
- prefer_old: False
- long_pair: rus-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-ar
|
Helsinki-NLP
| 2023-08-16T12:03:18Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- ru
- ar
tags:
- translation
license: apache-2.0
---
### rus-ara
* source group: Russian
* target group: Arabic
* OPUS readme: [rus-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md)
* model: transformer
* source language(s): rus
* target language(s): apc ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.ara | 16.6 | 0.486 |
### System Info:
- hf_name: rus-ara
- source_languages: rus
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'ar']
- src_constituents: {'rus'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt
- src_alpha3: rus
- tgt_alpha3: ara
- short_pair: ru-ar
- chrF2_score: 0.486
- bleu: 16.6
- brevity_penalty: 0.969
- ref_len: 18878.0
- src_name: Russian
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: ru
- tgt_alpha2: ar
- prefer_old: False
- long_pair: rus-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
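Because this model has several target variants (`apc`, `ara`, `arz`), inputs need the sentence-initial `>>id<<` token described above. A small sketch of building such inputs — the helper name is ours, not part of any library:

```python
def with_target_token(text: str, target_id: str) -> str:
    """Prefix a source sentence with the >>id<< token that multi-target
    OPUS-MT models use to select the output language/variant."""
    return f">>{target_id}<< {text}"

# Select Modern Standard Arabic ('ara') as the target before tokenizing.
src = with_target_token("Привет!", "ara")
print(src)  # >>ara<< Привет!
```

The resulting string is passed to the tokenizer as ordinary input text.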
|
Helsinki-NLP/opus-mt-ro-sv
|
Helsinki-NLP
| 2023-08-16T12:03:15Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ro",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ro-sv
* source languages: ro
* target languages: sv
* OPUS readme: [ro-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ro.sv | 31.2 | 0.529 |
|
Helsinki-NLP/opus-mt-ro-fi
|
Helsinki-NLP
| 2023-08-16T12:03:12Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ro",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ro-fi
* source languages: ro
* target languages: fi
* OPUS readme: [ro-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ro.fi | 25.2 | 0.521 |
|
Helsinki-NLP/opus-mt-rn-ru
|
Helsinki-NLP
| 2023-08-16T12:03:06Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rn",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- rn
- ru
tags:
- translation
license: apache-2.0
---
### run-rus
* source group: Rundi
* target group: Russian
* OPUS readme: [run-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-rus/README.md)
* model: transformer-align
* source language(s): run
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.rus | 17.1 | 0.321 |
### System Info:
- hf_name: run-rus
- source_languages: run
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'ru']
- src_constituents: {'run'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: rus
- short_pair: rn-ru
- chrF2_score: 0.321
- bleu: 17.1
- brevity_penalty: 1.0
- ref_len: 6635.0
- src_name: Rundi
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: ru
- prefer_old: False
- long_pair: run-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
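As with the other entries here, the Hugging Face repository ID is derived from the card's `short_pair` field (alpha-2 codes). A trivial sketch of that mapping, useful for scripted lookups over this metadata — the pattern is inferred from the cards in this table, not from an official naming specification:

```python
def opus_mt_repo(short_pair: str) -> str:
    """Map a card's short_pair (e.g. 'rn-ru') to its hub repository ID."""
    return f"Helsinki-NLP/opus-mt-{short_pair}"

print(opus_mt_repo("rn-ru"))  # Helsinki-NLP/opus-mt-rn-ru
```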
|
Helsinki-NLP/opus-mt-rn-fr
|
Helsinki-NLP
| 2023-08-16T12:03:05Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rn",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- rn
- fr
tags:
- translation
license: apache-2.0
---
### run-fra
* source group: Rundi
* target group: French
* OPUS readme: [run-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md)
* model: transformer-align
* source language(s): run
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.fra | 18.2 | 0.397 |
### System Info:
- hf_name: run-fra
- source_languages: run
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'fr']
- src_constituents: {'run'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: fra
- short_pair: rn-fr
- chrF2_score: 0.397
- bleu: 18.2
- brevity_penalty: 1.0
- ref_len: 7496.0
- src_name: Rundi
- tgt_name: French
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: fr
- prefer_old: False
- long_pair: run-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-rn-es
|
Helsinki-NLP
| 2023-08-16T12:03:03Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rn",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- rn
- es
tags:
- translation
license: apache-2.0
---
### run-spa
* source group: Rundi
* target group: Spanish
* OPUS readme: [run-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-spa/README.md)
* model: transformer-align
* source language(s): run
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.spa | 14.4 | 0.376 |
### System Info:
- hf_name: run-spa
- source_languages: run
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'es']
- src_constituents: {'run'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: spa
- short_pair: rn-es
- chrF2_score: 0.376
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 5167.0
- src_name: Rundi
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: es
- prefer_old: False
- long_pair: run-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pt-uk
|
Helsinki-NLP
| 2023-08-16T12:03:00Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pt",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- pt
- uk
tags:
- translation
license: apache-2.0
---
### por-ukr
* source group: Portuguese
* target group: Ukrainian
* OPUS readme: [por-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-ukr/README.md)
* model: transformer-align
* source language(s): por
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.ukr | 39.8 | 0.616 |
### System Info:
- hf_name: por-ukr
- source_languages: por
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'uk']
- src_constituents: {'por'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: ukr
- short_pair: pt-uk
- chrF2_score: 0.616
- bleu: 39.8
- brevity_penalty: 0.999
- ref_len: 18933.0
- src_name: Portuguese
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: uk
- prefer_old: False
- long_pair: por-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pt-tl
|
Helsinki-NLP
| 2023-08-16T12:02:58Z | 130 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pt",
"tl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- pt
- tl
tags:
- translation
license: apache-2.0
---
### por-tgl
* source group: Portuguese
* target group: Tagalog
* OPUS readme: [por-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md)
* model: transformer-align
* source language(s): por
* target language(s): tgl_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.tgl | 28.4 | 0.565 |
### System Info:
- hf_name: por-tgl
- source_languages: por
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'tl']
- src_constituents: {'por'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: tgl
- short_pair: pt-tl
- chrF2_score: 0.565
- bleu: 28.4
- brevity_penalty: 1.0
- ref_len: 13620.0
- src_name: Portuguese
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: tl
- prefer_old: False
- long_pair: por-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pt-ca
|
Helsinki-NLP
| 2023-08-16T12:02:55Z | 176 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pt",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- pt
- ca
tags:
- translation
license: apache-2.0
---
### por-cat
* source group: Portuguese
* target group: Catalan
* OPUS readme: [por-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-cat/README.md)
* model: transformer-align
* source language(s): por
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.cat | 45.7 | 0.672 |
### System Info:
- hf_name: por-cat
- source_languages: por
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'ca']
- src_constituents: {'por'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: cat
- short_pair: pt-ca
- chrF2_score: 0.672
- bleu: 45.7
- brevity_penalty: 0.972
- ref_len: 5878.0
- src_name: Portuguese
- tgt_name: Catalan
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: ca
- prefer_old: False
- long_pair: por-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pqe-en
|
Helsinki-NLP
| 2023-08-16T12:02:53Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fj",
"mi",
"ty",
"to",
"na",
"sm",
"mh",
"pqe",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- fj
- mi
- ty
- to
- na
- sm
- mh
- pqe
- en
tags:
- translation
license: apache-2.0
---
### pqe-eng
* source group: Eastern Malayo-Polynesian languages
* target group: English
* OPUS readme: [pqe-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pqe-eng/README.md)
* model: transformer
* source language(s): fij gil haw mah mri nau niu rap smo tah ton tvl
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.zip)
* test set translations: [opus-2020-06-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.test.txt)
* test set scores: [opus-2020-06-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fij-eng.fij.eng | 26.9 | 0.361 |
| Tatoeba-test.gil-eng.gil.eng | 49.0 | 0.618 |
| Tatoeba-test.haw-eng.haw.eng | 1.6 | 0.126 |
| Tatoeba-test.mah-eng.mah.eng | 13.7 | 0.257 |
| Tatoeba-test.mri-eng.mri.eng | 7.4 | 0.250 |
| Tatoeba-test.multi.eng | 12.6 | 0.268 |
| Tatoeba-test.nau-eng.nau.eng | 2.3 | 0.125 |
| Tatoeba-test.niu-eng.niu.eng | 34.4 | 0.471 |
| Tatoeba-test.rap-eng.rap.eng | 10.3 | 0.215 |
| Tatoeba-test.smo-eng.smo.eng | 28.5 | 0.413 |
| Tatoeba-test.tah-eng.tah.eng | 12.1 | 0.199 |
| Tatoeba-test.ton-eng.ton.eng | 41.8 | 0.517 |
| Tatoeba-test.tvl-eng.tvl.eng | 42.9 | 0.540 |
### System Info:
- hf_name: pqe-eng
- source_languages: pqe
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pqe-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe', 'en']
- src_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.test.txt
- src_alpha3: pqe
- tgt_alpha3: eng
- short_pair: pqe-en
- chrF2_score: 0.268
- bleu: 12.6
- brevity_penalty: 1.0
- ref_len: 4568.0
- src_name: Eastern Malayo-Polynesian languages
- tgt_name: English
- train_date: 2020-06-28
- src_alpha2: pqe
- tgt_alpha2: en
- prefer_old: False
- long_pair: pqe-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pon-en
|
Helsinki-NLP
| 2023-08-16T12:02:47Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pon",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pon-en
* source languages: pon
* target languages: en
* OPUS readme: [pon-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.en | 34.1 | 0.489 |
|
Helsinki-NLP/opus-mt-pl-es
|
Helsinki-NLP
| 2023-08-16T12:02:40Z | 1,773 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pl-es
* source languages: pl
* target languages: es
* OPUS readme: [pl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.es | 46.9 | 0.654 |
|
Helsinki-NLP/opus-mt-pl-en
|
Helsinki-NLP
| 2023-08-16T12:02:38Z | 129,245 | 22 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pl-en
* source languages: pl
* target languages: en
* OPUS readme: [pl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.en | 54.9 | 0.701 |
|
Helsinki-NLP/opus-mt-pis-fi
|
Helsinki-NLP
| 2023-08-16T12:02:32Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pis",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pis-fi
* source languages: pis
* target languages: fi
* OPUS readme: [pis-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.fi | 21.8 | 0.439 |
|
Helsinki-NLP/opus-mt-pis-en
|
Helsinki-NLP
| 2023-08-16T12:02:30Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pis",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pis-en
* source languages: pis
* target languages: en
* OPUS readme: [pis-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.en | 33.3 | 0.493 |
|
Helsinki-NLP/opus-mt-pap-fi
|
Helsinki-NLP
| 2023-08-16T12:02:26Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pap",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pap-fi
* source languages: pap
* target languages: fi
* OPUS readme: [pap-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.fi | 27.7 | 0.520 |
|
Helsinki-NLP/opus-mt-pap-en
|
Helsinki-NLP
| 2023-08-16T12:02:23Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pap",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pap-en
* source languages: pap
* target languages: en
* OPUS readme: [pap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.en | 47.3 | 0.634 |
| Tatoeba.pap.en | 63.2 | 0.684 |
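The chr-F column in these benchmark tables is a character n-gram F-score. A simplified sketch of the idea is below; it is not sacreBLEU's exact implementation (which handles n-gram averaging and whitespace options more carefully), but it shows how the metric is computed.

```python
from collections import Counter

def char_ngrams(text, n):
    # chrF operates on character n-grams; spaces are ignored by default.
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    # Average n-gram precision and recall for n = 1..max_n,
    # then combine them into an F-score weighted toward recall (beta=2).
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

For the official scores in the tables, use the `.eval.txt` files linked in each card, which were produced with the project's own evaluation pipeline.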
|
Helsinki-NLP/opus-mt-pap-de
|
Helsinki-NLP
| 2023-08-16T12:02:22Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pap",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pap-de
* source languages: pap
* target languages: de
* OPUS readme: [pap-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.de | 25.0 | 0.466 |
|
Helsinki-NLP/opus-mt-pag-sv
|
Helsinki-NLP
| 2023-08-16T12:02:21Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pag",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pag-sv
* source languages: pag
* target languages: sv
* OPUS readme: [pag-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.sv | 29.8 | 0.492 |
|
Helsinki-NLP/opus-mt-pag-fi
|
Helsinki-NLP
| 2023-08-16T12:02:20Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pag",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pag-fi
* source languages: pag
* target languages: fi
* OPUS readme: [pag-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.fi | 26.7 | 0.496 |
|
Helsinki-NLP/opus-mt-pag-en
|
Helsinki-NLP
| 2023-08-16T12:02:18Z | 115 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pag",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pag-en
* source languages: pag
* target languages: en
* OPUS readme: [pag-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.en | 42.4 | 0.580 |
|
Helsinki-NLP/opus-mt-om-en
|
Helsinki-NLP
| 2023-08-16T12:02:14Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"om",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-om-en
* source languages: om
* target languages: en
* OPUS readme: [om-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/om-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.om.en | 27.3 | 0.448 |
|
Helsinki-NLP/opus-mt-nyk-en
|
Helsinki-NLP
| 2023-08-16T12:02:13Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nyk",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-nyk-en
* source languages: nyk
* target languages: en
* OPUS readme: [nyk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nyk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nyk.en | 27.3 | 0.423 |
|
Helsinki-NLP/opus-mt-ny-en
|
Helsinki-NLP
| 2023-08-16T12:02:10Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ny",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ny-en
* source languages: ny
* target languages: en
* OPUS readme: [ny-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.en | 39.7 | 0.547 |
| Tatoeba.ny.en | 44.2 | 0.562 |
|
Helsinki-NLP/opus-mt-nso-sv
|
Helsinki-NLP
| 2023-08-16T12:02:08Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nso",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-sv
* source languages: nso
* target languages: sv
* OPUS readme: [nso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.sv | 34.3 | 0.527 |
|
Helsinki-NLP/opus-mt-nso-fi
|
Helsinki-NLP
| 2023-08-16T12:02:05Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nso",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-fi
* source languages: nso
* target languages: fi
* OPUS readme: [nso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.fi | 27.8 | 0.523 |
|
Helsinki-NLP/opus-mt-nso-de
|
Helsinki-NLP
| 2023-08-16T12:02:01Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nso",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-de
* source languages: nso
* target languages: de
* OPUS readme: [nso-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.de | 24.7 | 0.461 |
|
Helsinki-NLP/opus-mt-no-nl
|
Helsinki-NLP
| 2023-08-16T12:01:54Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- no
- nl
tags:
- translation
license: apache-2.0
---
### nor-nld
* source group: Norwegian
* target group: Dutch
* OPUS readme: [nor-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): nld
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.nld | 40.2 | 0.596 |
### System Info:
- hf_name: nor-nld
- source_languages: nor
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'nl']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: nld
- short_pair: no-nl
- chrF2_score: 0.596
- bleu: 40.2
- brevity_penalty: 0.959
- ref_len: 1535.0
- src_name: Norwegian
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: nl
- prefer_old: False
- long_pair: nor-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
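The `brevity_penalty` and `ref_len` fields in the System Info blocks come from BLEU's length penalty, which discounts candidates shorter than the reference. A small sketch of the standard formula:

```python
import math

def brevity_penalty(candidate_len, reference_len):
    # BLEU's brevity penalty: 1.0 when the candidate corpus is at least
    # as long as the reference, exp(1 - r/c) when it is shorter.
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)
```

For example, a `brevity_penalty` of 0.959 (as reported above) means the system's output was slightly shorter than the 1535-token reference set.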
|
Helsinki-NLP/opus-mt-no-fi
|
Helsinki-NLP
| 2023-08-16T12:01:52Z | 149 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- no
- fi
tags:
- translation
license: apache-2.0
---
### nor-fin
* source group: Norwegian
* target group: Finnish
* OPUS readme: [nor-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): fin
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.fin | 14.1 | 0.374 |
### System Info:
- hf_name: nor-fin
- source_languages: nor
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'fi']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: fin
- short_pair: no-fi
- chrF2_score: 0.374
- bleu: 14.1
- brevity_penalty: 0.894
- ref_len: 13066.0
- src_name: Norwegian
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: fi
- prefer_old: False
- long_pair: nor-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-no-es
|
Helsinki-NLP
| 2023-08-16T12:01:51Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- no
- es
tags:
- translation
license: apache-2.0
---
### nor-spa
* source group: Norwegian
* target group: Spanish
* OPUS readme: [nor-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.spa | 34.2 | 0.565 |
### System Info:
- hf_name: nor-spa
- source_languages: nor
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'es']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: spa
- short_pair: no-es
- chrF2_score: 0.565
- bleu: 34.2
- brevity_penalty: 0.997
- ref_len: 7311.0
- src_name: Norwegian
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: es
- prefer_old: False
- long_pair: nor-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-no-da
|
Helsinki-NLP
| 2023-08-16T12:01:48Z | 170 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- no
- da
tags:
- translation
license: apache-2.0
---
### nor-dan
* source group: Norwegian
* target group: Danish
* OPUS readme: [nor-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): dan
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.dan | 65.0 | 0.792 |
### System Info:
- hf_name: nor-dan
- source_languages: nor
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'da']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: dan
- short_pair: no-da
- chrF2_score: 0.792
- bleu: 65.0
- brevity_penalty: 0.995
- ref_len: 9865.0
- src_name: Norwegian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: da
- prefer_old: False
- long_pair: nor-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-nl-sv
|
Helsinki-NLP
| 2023-08-16T12:01:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-sv
* source languages: nl
* target languages: sv
* OPUS readme: [nl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.nl.sv | 25.0 | 0.518 |
|
Helsinki-NLP/opus-mt-nl-no
|
Helsinki-NLP
| 2023-08-16T12:01:45Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- nl
- no
tags:
- translation
license: apache-2.0
---
### nld-nor
* source group: Dutch
* target group: Norwegian
* OPUS readme: [nld-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.nor | 36.1 | 0.562 |
### System Info:
- hf_name: nld-nor
- source_languages: nld
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'no']
- src_constituents: {'nld'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: nor
- short_pair: nl-no
- chrF2_score: 0.562
- bleu: 36.1
- brevity_penalty: 0.966
- ref_len: 1459.0
- src_name: Dutch
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: no
- prefer_old: False
- long_pair: nld-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-nl-fr
|
Helsinki-NLP
| 2023-08-16T12:01:44Z | 34,406 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-fr
* source languages: nl
* target languages: fr
* OPUS readme: [nl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.fr | 51.3 | 0.674 |
|
Helsinki-NLP/opus-mt-nl-es
|
Helsinki-NLP
| 2023-08-16T12:01:42Z | 22,335 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-es
* source languages: nl
* target languages: es
* OPUS readme: [nl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.es | 51.6 | 0.698 |
|
Helsinki-NLP/opus-mt-nl-eo
|
Helsinki-NLP
| 2023-08-16T12:01:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- nl
- eo
tags:
- translation
license: apache-2.0
---
### nld-epo
* source group: Dutch
* target group: Esperanto
* OPUS readme: [nld-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.epo | 16.1 | 0.355 |
### System Info:
- hf_name: nld-epo
- source_languages: nld
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'eo']
- src_constituents: {'nld'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt
- src_alpha3: nld
- tgt_alpha3: epo
- short_pair: nl-eo
- chrF2_score: 0.355
- bleu: 16.1
- brevity_penalty: 0.936
- ref_len: 72293.0
- src_name: Dutch
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: nl
- tgt_alpha2: eo
- prefer_old: False
- long_pair: nld-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
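The `brevity_penalty` and `ref_len` fields above follow the standard BLEU definition: the penalty is 1.0 unless the hypothesis corpus is shorter than the reference. A minimal sketch, assuming corpus-level hypothesis and reference token counts:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 unless the hypothesis corpus is shorter
    than the reference, in which case exp(1 - ref_len / hyp_len)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A hypothesis corpus 20% shorter than the reference is penalized:
print(round(brevity_penalty(80, 100), 3))  # 0.779
```

A value of 0.936, as reported here, means the system's translations were slightly shorter overall than the 72,293-token reference.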
| Helsinki-NLP/opus-mt-nl-ca | Helsinki-NLP | 2023-08-16T12:01:38Z | 112 downloads | 0 likes | transformers | [transformers, pytorch, tf, marian, text2text-generation, translation, nl, ca, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | translation | 2022-03-02T23:29:04Z |
---
language:
- nl
- ca
tags:
- translation
license: apache-2.0
---
### nld-cat
* source group: Dutch
* target group: Catalan
* OPUS readme: [nld-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): cat
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.cat | 42.1 | 0.624 |
### System Info:
- hf_name: nld-cat
- source_languages: nld
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'ca']
- src_constituents: {'nld'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt
- src_alpha3: nld
- tgt_alpha3: cat
- short_pair: nl-ca
- chrF2_score: 0.624
- bleu: 42.1
- brevity_penalty: 0.988
- ref_len: 3942.0
- src_name: Dutch
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: nl
- tgt_alpha2: ca
- prefer_old: False
- long_pair: nld-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
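The chr-F column in these benchmark tables is a character n-gram F-score (chrF2, i.e. beta = 2, so recall is weighted twice as heavily as precision). A simplified single-sentence sketch — the real metric additionally averages statistics over a whole corpus and normalizes whitespace; `n_max` and `beta` here are illustrative parameters:

```python
from collections import Counter

def chrf(hyp: str, ref: str, n_max: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: average character n-gram F-beta over orders 1..n_max.
    beta=2 weights recall twice as heavily as precision (chrF2)."""
    def ngrams(s: str, n: int) -> Counter:
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))

    scores = []
    for n in range(1, n_max + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        if not h or not r:
            continue  # strings too short for this n-gram order
        overlap = sum((h & r).values())  # clipped n-gram matches
        prec = overlap / sum(h.values())
        rec = overlap / sum(r.values())
        if prec + rec == 0.0:
            scores.append(0.0)
        else:
            scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0
```

An exact match scores 1.0, fully disjoint strings score 0.0, and partial character overlap lands in between — matching the 0-to-1 range of the chr-F figures above.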
| Helsinki-NLP/opus-mt-nl-af | Helsinki-NLP | 2023-08-16T12:01:37Z | 130 downloads | 0 likes | transformers | [transformers, pytorch, tf, marian, text2text-generation, translation, nl, af, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | translation | 2022-03-02T23:29:04Z |
---
language:
- nl
- af
tags:
- translation
license: apache-2.0
---
### nld-afr
* source group: Dutch
* target group: Afrikaans
* OPUS readme: [nld-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-afr/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): afr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.afr | 57.8 | 0.749 |
### System Info:
- hf_name: nld-afr
- source_languages: nld
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'af']
- src_constituents: {'nld'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: afr
- short_pair: nl-af
- chrF2_score: 0.749
- bleu: 57.8
- brevity_penalty: 1.0
- ref_len: 6823.0
- src_name: Dutch
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: af
- prefer_old: False
- long_pair: nld-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-niu-fi | Helsinki-NLP | 2023-08-16T12:01:33Z | 110 downloads | 0 likes | transformers | [transformers, pytorch, tf, marian, text2text-generation, translation, niu, fi, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-fi
* source languages: niu
* target languages: fi
* OPUS readme: [niu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.fi | 24.8 | 0.474 |
| Helsinki-NLP/opus-mt-niu-en | Helsinki-NLP | 2023-08-16T12:01:30Z | 126 downloads | 0 likes | transformers | [transformers, pytorch, tf, marian, text2text-generation, translation, niu, en, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-en
* source languages: niu
* target languages: en
* OPUS readme: [niu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.en | 46.1 | 0.604 |
| Helsinki-NLP/opus-mt-nic-en | Helsinki-NLP | 2023-08-16T12:01:27Z | 115 downloads | 0 likes | transformers | [transformers, pytorch, tf, marian, text2text-generation, translation, sn, rw, wo, ig, sg, ee, zu, lg, ts, ln, ny, yo, rn, xh, nic, en, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | translation | 2022-03-02T23:29:04Z |
---
language:
- sn
- rw
- wo
- ig
- sg
- ee
- zu
- lg
- ts
- ln
- ny
- yo
- rn
- xh
- nic
- en
tags:
- translation
license: apache-2.0
---
### nic-eng
* source group: Niger-Kordofanian languages
* target group: English
* OPUS readme: [nic-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md)
* model: transformer
* source language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bam-eng.bam.eng | 2.4 | 0.090 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.3 | 0.384 |
| Tatoeba-test.ful-eng.ful.eng | 1.2 | 0.114 |
| Tatoeba-test.ibo-eng.ibo.eng | 7.5 | 0.197 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.481 |
| Tatoeba-test.lin-eng.lin.eng | 3.1 | 0.185 |
| Tatoeba-test.lug-eng.lug.eng | 3.1 | 0.261 |
| Tatoeba-test.multi.eng | 21.3 | 0.377 |
| Tatoeba-test.nya-eng.nya.eng | 31.6 | 0.502 |
| Tatoeba-test.run-eng.run.eng | 24.9 | 0.420 |
| Tatoeba-test.sag-eng.sag.eng | 5.2 | 0.231 |
| Tatoeba-test.sna-eng.sna.eng | 20.1 | 0.374 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.191 |
| Tatoeba-test.toi-eng.toi.eng | 4.8 | 0.122 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 9.0 | 0.246 |
| Tatoeba-test.wol-eng.wol.eng | 14.0 | 0.212 |
| Tatoeba-test.xho-eng.xho.eng | 38.2 | 0.558 |
| Tatoeba-test.yor-eng.yor.eng | 21.2 | 0.364 |
| Tatoeba-test.zul-eng.zul.eng | 42.3 | 0.589 |
### System Info:
- hf_name: nic-eng
- source_languages: nic
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic', 'en']
- src_constituents: {'bam_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt
- src_alpha3: nic
- tgt_alpha3: eng
- short_pair: nic-en
- chrF2_score: 0.377
- bleu: 21.3
- brevity_penalty: 1.0
- ref_len: 15228.0
- src_name: Niger-Kordofanian languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: nic
- tgt_alpha2: en
- prefer_old: False
- long_pair: nic-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-mul-en | Helsinki-NLP | 2023-08-16T12:01:25Z | 178,860 downloads | 72 likes | transformers | [transformers, pytorch, tf, marian, text2text-generation, translation, ca, es, os, eo, ro, fy, cy, is, lb, su, an, sq, fr, ht, rm, cv, ig, am, eu, tr, ps, af, ny, ch, uk, sl, lt, tk, sg, ar, lg, bg, be, ka, gd, ja, si, br, mh, km, th, ty, rw, te, mk, or, wo, kl, mr, ru, yo, hu, fo, zh, ti, co, ee, oc, sn, mt, ts, pl, gl, nb, bn, tt, bo, lo, id, gn, nv, hy, kn, to, io, so, vi, da, fj, gv, sm, nl, mi, pt, hi, se, as, ta, et, kw, ga, sv, ln, na, mn, gu, wa, lv, jv, el, my, ba, it, hr, ur, ce, nn, fi, mg, rn, xh, ab, de, cs, he, zu, yi, ml, mul, en, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | translation | 2022-03-02T23:29:04Z |
---
language:
- ca
- es
- os
- eo
- ro
- fy
- cy
- is
- lb
- su
- an
- sq
- fr
- ht
- rm
- cv
- ig
- am
- eu
- tr
- ps
- af
- ny
- ch
- uk
- sl
- lt
- tk
- sg
- ar
- lg
- bg
- be
- ka
- gd
- ja
- si
- br
- mh
- km
- th
- ty
- rw
- te
- mk
- or
- wo
- kl
- mr
- ru
- yo
- hu
- fo
- zh
- ti
- co
- ee
- oc
- sn
- mt
- ts
- pl
- gl
- nb
- bn
- tt
- bo
- lo
- id
- gn
- nv
- hy
- kn
- to
- io
- so
- vi
- da
- fj
- gv
- sm
- nl
- mi
- pt
- hi
- se
- as
- ta
- et
- kw
- ga
- sv
- ln
- na
- mn
- gu
- wa
- lv
- jv
- el
- my
- ba
- it
- hr
- ur
- ce
- nn
- fi
- mg
- rn
- xh
- ab
- de
- cs
- he
- zu
- yi
- ml
- mul
- en
tags:
- translation
license: apache-2.0
---
### mul-eng
* source group: Multiple languages
* target group: English
* OPUS readme: [mul-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md)
* model: transformer
* source language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.5 | 0.341 |
| newsdev2015-enfi-fineng.fin.eng | 16.8 | 0.441 |
| newsdev2016-enro-roneng.ron.eng | 31.3 | 0.580 |
| newsdev2016-entr-tureng.tur.eng | 16.4 | 0.422 |
| newsdev2017-enlv-laveng.lav.eng | 21.3 | 0.502 |
| newsdev2017-enzh-zhoeng.zho.eng | 12.7 | 0.409 |
| newsdev2018-enet-esteng.est.eng | 19.8 | 0.467 |
| newsdev2019-engu-gujeng.guj.eng | 13.3 | 0.385 |
| newsdev2019-enlt-liteng.lit.eng | 19.9 | 0.482 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 26.7 | 0.520 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 29.8 | 0.541 |
| newssyscomb2009-ceseng.ces.eng | 21.1 | 0.487 |
| newssyscomb2009-deueng.deu.eng | 22.6 | 0.499 |
| newssyscomb2009-fraeng.fra.eng | 25.8 | 0.530 |
| newssyscomb2009-huneng.hun.eng | 15.1 | 0.430 |
| newssyscomb2009-itaeng.ita.eng | 29.4 | 0.555 |
| newssyscomb2009-spaeng.spa.eng | 26.1 | 0.534 |
| news-test2008-deueng.deu.eng | 21.6 | 0.491 |
| news-test2008-fraeng.fra.eng | 22.3 | 0.502 |
| news-test2008-spaeng.spa.eng | 23.6 | 0.514 |
| newstest2009-ceseng.ces.eng | 19.8 | 0.480 |
| newstest2009-deueng.deu.eng | 20.9 | 0.487 |
| newstest2009-fraeng.fra.eng | 25.0 | 0.523 |
| newstest2009-huneng.hun.eng | 14.7 | 0.425 |
| newstest2009-itaeng.ita.eng | 27.6 | 0.542 |
| newstest2009-spaeng.spa.eng | 25.7 | 0.530 |
| newstest2010-ceseng.ces.eng | 20.6 | 0.491 |
| newstest2010-deueng.deu.eng | 23.4 | 0.517 |
| newstest2010-fraeng.fra.eng | 26.1 | 0.537 |
| newstest2010-spaeng.spa.eng | 29.1 | 0.561 |
| newstest2011-ceseng.ces.eng | 21.0 | 0.489 |
| newstest2011-deueng.deu.eng | 21.3 | 0.494 |
| newstest2011-fraeng.fra.eng | 26.8 | 0.546 |
| newstest2011-spaeng.spa.eng | 28.2 | 0.549 |
| newstest2012-ceseng.ces.eng | 20.5 | 0.485 |
| newstest2012-deueng.deu.eng | 22.3 | 0.503 |
| newstest2012-fraeng.fra.eng | 27.5 | 0.545 |
| newstest2012-ruseng.rus.eng | 26.6 | 0.532 |
| newstest2012-spaeng.spa.eng | 30.3 | 0.567 |
| newstest2013-ceseng.ces.eng | 22.5 | 0.498 |
| newstest2013-deueng.deu.eng | 25.0 | 0.518 |
| newstest2013-fraeng.fra.eng | 27.4 | 0.537 |
| newstest2013-ruseng.rus.eng | 21.6 | 0.484 |
| newstest2013-spaeng.spa.eng | 28.4 | 0.555 |
| newstest2014-csen-ceseng.ces.eng | 24.0 | 0.517 |
| newstest2014-deen-deueng.deu.eng | 24.1 | 0.511 |
| newstest2014-fren-fraeng.fra.eng | 29.1 | 0.563 |
| newstest2014-hien-hineng.hin.eng | 14.0 | 0.414 |
| newstest2014-ruen-ruseng.rus.eng | 24.0 | 0.521 |
| newstest2015-encs-ceseng.ces.eng | 21.9 | 0.481 |
| newstest2015-ende-deueng.deu.eng | 25.5 | 0.519 |
| newstest2015-enfi-fineng.fin.eng | 17.4 | 0.441 |
| newstest2015-enru-ruseng.rus.eng | 22.4 | 0.494 |
| newstest2016-encs-ceseng.ces.eng | 23.0 | 0.500 |
| newstest2016-ende-deueng.deu.eng | 30.1 | 0.560 |
| newstest2016-enfi-fineng.fin.eng | 18.5 | 0.461 |
| newstest2016-enro-roneng.ron.eng | 29.6 | 0.562 |
| newstest2016-enru-ruseng.rus.eng | 22.0 | 0.495 |
| newstest2016-entr-tureng.tur.eng | 14.8 | 0.415 |
| newstest2017-encs-ceseng.ces.eng | 20.2 | 0.475 |
| newstest2017-ende-deueng.deu.eng | 26.0 | 0.523 |
| newstest2017-enfi-fineng.fin.eng | 19.6 | 0.465 |
| newstest2017-enlv-laveng.lav.eng | 16.2 | 0.454 |
| newstest2017-enru-ruseng.rus.eng | 24.2 | 0.510 |
| newstest2017-entr-tureng.tur.eng | 15.0 | 0.412 |
| newstest2017-enzh-zhoeng.zho.eng | 13.7 | 0.412 |
| newstest2018-encs-ceseng.ces.eng | 21.2 | 0.486 |
| newstest2018-ende-deueng.deu.eng | 31.5 | 0.564 |
| newstest2018-enet-esteng.est.eng | 19.7 | 0.473 |
| newstest2018-enfi-fineng.fin.eng | 15.1 | 0.418 |
| newstest2018-enru-ruseng.rus.eng | 21.3 | 0.490 |
| newstest2018-entr-tureng.tur.eng | 15.4 | 0.421 |
| newstest2018-enzh-zhoeng.zho.eng | 12.9 | 0.408 |
| newstest2019-deen-deueng.deu.eng | 27.0 | 0.529 |
| newstest2019-fien-fineng.fin.eng | 17.2 | 0.438 |
| newstest2019-guen-gujeng.guj.eng | 9.0 | 0.342 |
| newstest2019-lten-liteng.lit.eng | 22.6 | 0.512 |
| newstest2019-ruen-ruseng.rus.eng | 24.1 | 0.503 |
| newstest2019-zhen-zhoeng.zho.eng | 13.9 | 0.427 |
| newstestB2016-enfi-fineng.fin.eng | 15.2 | 0.428 |
| newstestB2017-enfi-fineng.fin.eng | 16.8 | 0.442 |
| newstestB2017-fien-fineng.fin.eng | 16.8 | 0.442 |
| Tatoeba-test.abk-eng.abk.eng | 2.4 | 0.190 |
| Tatoeba-test.ady-eng.ady.eng | 1.1 | 0.111 |
| Tatoeba-test.afh-eng.afh.eng | 1.7 | 0.108 |
| Tatoeba-test.afr-eng.afr.eng | 53.0 | 0.672 |
| Tatoeba-test.akl-eng.akl.eng | 5.9 | 0.239 |
| Tatoeba-test.amh-eng.amh.eng | 25.6 | 0.464 |
| Tatoeba-test.ang-eng.ang.eng | 11.7 | 0.289 |
| Tatoeba-test.ara-eng.ara.eng | 26.4 | 0.443 |
| Tatoeba-test.arg-eng.arg.eng | 35.9 | 0.473 |
| Tatoeba-test.asm-eng.asm.eng | 19.8 | 0.365 |
| Tatoeba-test.ast-eng.ast.eng | 31.8 | 0.467 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.119 |
| Tatoeba-test.awa-eng.awa.eng | 9.7 | 0.271 |
| Tatoeba-test.aze-eng.aze.eng | 37.0 | 0.542 |
| Tatoeba-test.bak-eng.bak.eng | 13.9 | 0.395 |
| Tatoeba-test.bam-eng.bam.eng | 2.2 | 0.094 |
| Tatoeba-test.bel-eng.bel.eng | 36.8 | 0.549 |
| Tatoeba-test.ben-eng.ben.eng | 39.7 | 0.546 |
| Tatoeba-test.bho-eng.bho.eng | 33.6 | 0.540 |
| Tatoeba-test.bod-eng.bod.eng | 1.1 | 0.147 |
| Tatoeba-test.bre-eng.bre.eng | 14.2 | 0.303 |
| Tatoeba-test.brx-eng.brx.eng | 1.7 | 0.130 |
| Tatoeba-test.bul-eng.bul.eng | 46.0 | 0.621 |
| Tatoeba-test.cat-eng.cat.eng | 46.6 | 0.636 |
| Tatoeba-test.ceb-eng.ceb.eng | 17.4 | 0.347 |
| Tatoeba-test.ces-eng.ces.eng | 41.3 | 0.586 |
| Tatoeba-test.cha-eng.cha.eng | 7.9 | 0.232 |
| Tatoeba-test.che-eng.che.eng | 0.7 | 0.104 |
| Tatoeba-test.chm-eng.chm.eng | 7.3 | 0.261 |
| Tatoeba-test.chr-eng.chr.eng | 8.8 | 0.244 |
| Tatoeba-test.chv-eng.chv.eng | 11.0 | 0.319 |
| Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.204 |
| Tatoeba-test.cos-eng.cos.eng | 58.2 | 0.643 |
| Tatoeba-test.crh-eng.crh.eng | 26.3 | 0.399 |
| Tatoeba-test.csb-eng.csb.eng | 18.8 | 0.389 |
| Tatoeba-test.cym-eng.cym.eng | 23.4 | 0.407 |
| Tatoeba-test.dan-eng.dan.eng | 50.5 | 0.659 |
| Tatoeba-test.deu-eng.deu.eng | 39.6 | 0.579 |
| Tatoeba-test.dsb-eng.dsb.eng | 24.3 | 0.449 |
| Tatoeba-test.dtp-eng.dtp.eng | 1.0 | 0.149 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.061 |
| Tatoeba-test.egl-eng.egl.eng | 7.6 | 0.236 |
| Tatoeba-test.ell-eng.ell.eng | 55.4 | 0.682 |
| Tatoeba-test.enm-eng.enm.eng | 28.0 | 0.489 |
| Tatoeba-test.epo-eng.epo.eng | 41.8 | 0.591 |
| Tatoeba-test.est-eng.est.eng | 41.5 | 0.581 |
| Tatoeba-test.eus-eng.eus.eng | 37.8 | 0.557 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.7 | 0.262 |
| Tatoeba-test.ext-eng.ext.eng | 25.5 | 0.405 |
| Tatoeba-test.fao-eng.fao.eng | 28.7 | 0.469 |
| Tatoeba-test.fas-eng.fas.eng | 7.5 | 0.281 |
| Tatoeba-test.fij-eng.fij.eng | 24.2 | 0.320 |
| Tatoeba-test.fin-eng.fin.eng | 35.8 | 0.534 |
| Tatoeba-test.fkv-eng.fkv.eng | 15.5 | 0.434 |
| Tatoeba-test.fra-eng.fra.eng | 45.1 | 0.618 |
| Tatoeba-test.frm-eng.frm.eng | 29.6 | 0.427 |
| Tatoeba-test.frr-eng.frr.eng | 5.5 | 0.138 |
| Tatoeba-test.fry-eng.fry.eng | 25.3 | 0.455 |
| Tatoeba-test.ful-eng.ful.eng | 1.1 | 0.127 |
| Tatoeba-test.gcf-eng.gcf.eng | 16.0 | 0.315 |
| Tatoeba-test.gil-eng.gil.eng | 46.7 | 0.587 |
| Tatoeba-test.gla-eng.gla.eng | 20.2 | 0.358 |
| Tatoeba-test.gle-eng.gle.eng | 43.9 | 0.592 |
| Tatoeba-test.glg-eng.glg.eng | 45.1 | 0.623 |
| Tatoeba-test.glv-eng.glv.eng | 3.3 | 0.119 |
| Tatoeba-test.gos-eng.gos.eng | 20.1 | 0.364 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.041 |
| Tatoeba-test.grc-eng.grc.eng | 2.1 | 0.137 |
| Tatoeba-test.grn-eng.grn.eng | 1.7 | 0.152 |
| Tatoeba-test.gsw-eng.gsw.eng | 18.2 | 0.334 |
| Tatoeba-test.guj-eng.guj.eng | 21.7 | 0.373 |
| Tatoeba-test.hat-eng.hat.eng | 34.5 | 0.502 |
| Tatoeba-test.hau-eng.hau.eng | 10.5 | 0.295 |
| Tatoeba-test.haw-eng.haw.eng | 2.8 | 0.160 |
| Tatoeba-test.hbs-eng.hbs.eng | 46.7 | 0.623 |
| Tatoeba-test.heb-eng.heb.eng | 33.0 | 0.492 |
| Tatoeba-test.hif-eng.hif.eng | 17.0 | 0.391 |
| Tatoeba-test.hil-eng.hil.eng | 16.0 | 0.339 |
| Tatoeba-test.hin-eng.hin.eng | 36.4 | 0.533 |
| Tatoeba-test.hmn-eng.hmn.eng | 0.4 | 0.131 |
| Tatoeba-test.hoc-eng.hoc.eng | 0.7 | 0.132 |
| Tatoeba-test.hsb-eng.hsb.eng | 41.9 | 0.551 |
| Tatoeba-test.hun-eng.hun.eng | 33.2 | 0.510 |
| Tatoeba-test.hye-eng.hye.eng | 32.2 | 0.487 |
| Tatoeba-test.iba-eng.iba.eng | 9.4 | 0.278 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.8 | 0.200 |
| Tatoeba-test.ido-eng.ido.eng | 31.7 | 0.503 |
| Tatoeba-test.iku-eng.iku.eng | 9.1 | 0.164 |
| Tatoeba-test.ile-eng.ile.eng | 42.2 | 0.595 |
| Tatoeba-test.ilo-eng.ilo.eng | 29.7 | 0.485 |
| Tatoeba-test.ina-eng.ina.eng | 42.1 | 0.607 |
| Tatoeba-test.isl-eng.isl.eng | 35.7 | 0.527 |
| Tatoeba-test.ita-eng.ita.eng | 54.8 | 0.686 |
| Tatoeba-test.izh-eng.izh.eng | 28.3 | 0.526 |
| Tatoeba-test.jav-eng.jav.eng | 10.0 | 0.282 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.3 | 0.115 |
| Tatoeba-test.jdt-eng.jdt.eng | 5.3 | 0.140 |
| Tatoeba-test.jpn-eng.jpn.eng | 18.8 | 0.387 |
| Tatoeba-test.kab-eng.kab.eng | 3.9 | 0.205 |
| Tatoeba-test.kal-eng.kal.eng | 16.9 | 0.329 |
| Tatoeba-test.kan-eng.kan.eng | 16.2 | 0.374 |
| Tatoeba-test.kat-eng.kat.eng | 31.1 | 0.493 |
| Tatoeba-test.kaz-eng.kaz.eng | 24.5 | 0.437 |
| Tatoeba-test.kek-eng.kek.eng | 7.4 | 0.192 |
| Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.154 |
| Tatoeba-test.khm-eng.khm.eng | 12.2 | 0.290 |
| Tatoeba-test.kin-eng.kin.eng | 22.5 | 0.355 |
| Tatoeba-test.kir-eng.kir.eng | 27.2 | 0.470 |
| Tatoeba-test.kjh-eng.kjh.eng | 2.1 | 0.129 |
| Tatoeba-test.kok-eng.kok.eng | 4.5 | 0.259 |
| Tatoeba-test.kom-eng.kom.eng | 1.4 | 0.099 |
| Tatoeba-test.krl-eng.krl.eng | 26.1 | 0.387 |
| Tatoeba-test.ksh-eng.ksh.eng | 5.5 | 0.256 |
| Tatoeba-test.kum-eng.kum.eng | 9.3 | 0.288 |
| Tatoeba-test.kur-eng.kur.eng | 9.6 | 0.208 |
| Tatoeba-test.lad-eng.lad.eng | 30.1 | 0.475 |
| Tatoeba-test.lah-eng.lah.eng | 11.6 | 0.284 |
| Tatoeba-test.lao-eng.lao.eng | 4.5 | 0.214 |
| Tatoeba-test.lat-eng.lat.eng | 21.5 | 0.402 |
| Tatoeba-test.lav-eng.lav.eng | 40.2 | 0.577 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.8 | 0.115 |
| Tatoeba-test.lfn-eng.lfn.eng | 23.0 | 0.433 |
| Tatoeba-test.lij-eng.lij.eng | 9.3 | 0.287 |
| Tatoeba-test.lin-eng.lin.eng | 2.4 | 0.196 |
| Tatoeba-test.lit-eng.lit.eng | 44.0 | 0.597 |
| Tatoeba-test.liv-eng.liv.eng | 1.6 | 0.115 |
| Tatoeba-test.lkt-eng.lkt.eng | 2.0 | 0.113 |
| Tatoeba-test.lld-eng.lld.eng | 18.3 | 0.312 |
| Tatoeba-test.lmo-eng.lmo.eng | 25.4 | 0.395 |
| Tatoeba-test.ltz-eng.ltz.eng | 35.9 | 0.509 |
| Tatoeba-test.lug-eng.lug.eng | 5.1 | 0.357 |
| Tatoeba-test.mad-eng.mad.eng | 2.8 | 0.123 |
| Tatoeba-test.mah-eng.mah.eng | 5.7 | 0.175 |
| Tatoeba-test.mai-eng.mai.eng | 56.3 | 0.703 |
| Tatoeba-test.mal-eng.mal.eng | 37.5 | 0.534 |
| Tatoeba-test.mar-eng.mar.eng | 22.8 | 0.470 |
| Tatoeba-test.mdf-eng.mdf.eng | 2.0 | 0.110 |
| Tatoeba-test.mfe-eng.mfe.eng | 59.2 | 0.764 |
| Tatoeba-test.mic-eng.mic.eng | 9.0 | 0.199 |
| Tatoeba-test.mkd-eng.mkd.eng | 44.3 | 0.593 |
| Tatoeba-test.mlg-eng.mlg.eng | 31.9 | 0.424 |
| Tatoeba-test.mlt-eng.mlt.eng | 38.6 | 0.540 |
| Tatoeba-test.mnw-eng.mnw.eng | 2.5 | 0.101 |
| Tatoeba-test.moh-eng.moh.eng | 0.3 | 0.110 |
| Tatoeba-test.mon-eng.mon.eng | 13.5 | 0.334 |
| Tatoeba-test.mri-eng.mri.eng | 8.5 | 0.260 |
| Tatoeba-test.msa-eng.msa.eng | 33.9 | 0.520 |
| Tatoeba-test.multi.eng | 34.7 | 0.518 |
| Tatoeba-test.mwl-eng.mwl.eng | 37.4 | 0.630 |
| Tatoeba-test.mya-eng.mya.eng | 15.5 | 0.335 |
| Tatoeba-test.myv-eng.myv.eng | 0.8 | 0.118 |
| Tatoeba-test.nau-eng.nau.eng | 9.0 | 0.186 |
| Tatoeba-test.nav-eng.nav.eng | 1.3 | 0.144 |
| Tatoeba-test.nds-eng.nds.eng | 30.7 | 0.495 |
| Tatoeba-test.nep-eng.nep.eng | 3.5 | 0.168 |
| Tatoeba-test.niu-eng.niu.eng | 42.7 | 0.492 |
| Tatoeba-test.nld-eng.nld.eng | 47.9 | 0.640 |
| Tatoeba-test.nog-eng.nog.eng | 12.7 | 0.284 |
| Tatoeba-test.non-eng.non.eng | 43.8 | 0.586 |
| Tatoeba-test.nor-eng.nor.eng | 45.5 | 0.619 |
| Tatoeba-test.nov-eng.nov.eng | 26.9 | 0.472 |
| Tatoeba-test.nya-eng.nya.eng | 33.2 | 0.456 |
| Tatoeba-test.oci-eng.oci.eng | 17.9 | 0.370 |
| Tatoeba-test.ori-eng.ori.eng | 14.6 | 0.305 |
| Tatoeba-test.orv-eng.orv.eng | 11.0 | 0.283 |
| Tatoeba-test.oss-eng.oss.eng | 4.1 | 0.211 |
| Tatoeba-test.ota-eng.ota.eng | 4.1 | 0.216 |
| Tatoeba-test.pag-eng.pag.eng | 24.3 | 0.468 |
| Tatoeba-test.pan-eng.pan.eng | 16.4 | 0.358 |
| Tatoeba-test.pap-eng.pap.eng | 53.2 | 0.628 |
| Tatoeba-test.pau-eng.pau.eng | 3.7 | 0.173 |
| Tatoeba-test.pdc-eng.pdc.eng | 45.3 | 0.569 |
| Tatoeba-test.pms-eng.pms.eng | 14.0 | 0.345 |
| Tatoeba-test.pol-eng.pol.eng | 41.7 | 0.588 |
| Tatoeba-test.por-eng.por.eng | 51.4 | 0.669 |
| Tatoeba-test.ppl-eng.ppl.eng | 0.4 | 0.134 |
| Tatoeba-test.prg-eng.prg.eng | 4.1 | 0.198 |
| Tatoeba-test.pus-eng.pus.eng | 6.7 | 0.233 |
| Tatoeba-test.quc-eng.quc.eng | 3.5 | 0.091 |
| Tatoeba-test.qya-eng.qya.eng | 0.2 | 0.090 |
| Tatoeba-test.rap-eng.rap.eng | 17.5 | 0.230 |
| Tatoeba-test.rif-eng.rif.eng | 4.2 | 0.164 |
| Tatoeba-test.roh-eng.roh.eng | 24.6 | 0.464 |
| Tatoeba-test.rom-eng.rom.eng | 3.4 | 0.212 |
| Tatoeba-test.ron-eng.ron.eng | 45.2 | 0.620 |
| Tatoeba-test.rue-eng.rue.eng | 21.4 | 0.390 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.392 |
| Tatoeba-test.rus-eng.rus.eng | 42.7 | 0.591 |
| Tatoeba-test.sag-eng.sag.eng | 3.4 | 0.187 |
| Tatoeba-test.sah-eng.sah.eng | 5.0 | 0.177 |
| Tatoeba-test.san-eng.san.eng | 2.0 | 0.172 |
| Tatoeba-test.scn-eng.scn.eng | 35.8 | 0.410 |
| Tatoeba-test.sco-eng.sco.eng | 34.6 | 0.520 |
| Tatoeba-test.sgs-eng.sgs.eng | 21.8 | 0.299 |
| Tatoeba-test.shs-eng.shs.eng | 1.8 | 0.122 |
| Tatoeba-test.shy-eng.shy.eng | 1.4 | 0.104 |
| Tatoeba-test.sin-eng.sin.eng | 20.6 | 0.429 |
| Tatoeba-test.sjn-eng.sjn.eng | 1.2 | 0.095 |
| Tatoeba-test.slv-eng.slv.eng | 37.0 | 0.545 |
| Tatoeba-test.sma-eng.sma.eng | 4.4 | 0.147 |
| Tatoeba-test.sme-eng.sme.eng | 8.9 | 0.229 |
| Tatoeba-test.smo-eng.smo.eng | 37.7 | 0.483 |
| Tatoeba-test.sna-eng.sna.eng | 18.0 | 0.359 |
| Tatoeba-test.snd-eng.snd.eng | 28.1 | 0.444 |
| Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 |
| Tatoeba-test.spa-eng.spa.eng | 47.9 | 0.645 |
| Tatoeba-test.sqi-eng.sqi.eng | 46.9 | 0.634 |
| Tatoeba-test.stq-eng.stq.eng | 8.1 | 0.379 |
| Tatoeba-test.sun-eng.sun.eng | 23.8 | 0.369 |
| Tatoeba-test.swa-eng.swa.eng | 6.5 | 0.193 |
| Tatoeba-test.swe-eng.swe.eng | 51.4 | 0.655 |
| Tatoeba-test.swg-eng.swg.eng | 18.5 | 0.342 |
| Tatoeba-test.tah-eng.tah.eng | 25.6 | 0.249 |
| Tatoeba-test.tam-eng.tam.eng | 29.1 | 0.437 |
| Tatoeba-test.tat-eng.tat.eng | 12.9 | 0.327 |
| Tatoeba-test.tel-eng.tel.eng | 21.2 | 0.386 |
| Tatoeba-test.tet-eng.tet.eng | 9.2 | 0.215 |
| Tatoeba-test.tgk-eng.tgk.eng | 12.7 | 0.374 |
| Tatoeba-test.tha-eng.tha.eng | 36.3 | 0.531 |
| Tatoeba-test.tir-eng.tir.eng | 9.1 | 0.267 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.084 |
| Tatoeba-test.tly-eng.tly.eng | 2.1 | 0.128 |
| Tatoeba-test.toi-eng.toi.eng | 5.3 | 0.150 |
| Tatoeba-test.ton-eng.ton.eng | 39.5 | 0.473 |
| Tatoeba-test.tpw-eng.tpw.eng | 1.5 | 0.160 |
| Tatoeba-test.tso-eng.tso.eng | 44.7 | 0.526 |
| Tatoeba-test.tuk-eng.tuk.eng | 18.6 | 0.401 |
| Tatoeba-test.tur-eng.tur.eng | 40.5 | 0.573 |
| Tatoeba-test.tvl-eng.tvl.eng | 55.0 | 0.593 |
| Tatoeba-test.tyv-eng.tyv.eng | 19.1 | 0.477 |
| Tatoeba-test.tzl-eng.tzl.eng | 17.7 | 0.333 |
| Tatoeba-test.udm-eng.udm.eng | 3.4 | 0.217 |
| Tatoeba-test.uig-eng.uig.eng | 11.4 | 0.289 |
| Tatoeba-test.ukr-eng.ukr.eng | 43.1 | 0.595 |
| Tatoeba-test.umb-eng.umb.eng | 9.2 | 0.260 |
| Tatoeba-test.urd-eng.urd.eng | 23.2 | 0.426 |
| Tatoeba-test.uzb-eng.uzb.eng | 19.0 | 0.342 |
| Tatoeba-test.vec-eng.vec.eng | 41.1 | 0.409 |
| Tatoeba-test.vie-eng.vie.eng | 30.6 | 0.481 |
| Tatoeba-test.vol-eng.vol.eng | 1.8 | 0.143 |
| Tatoeba-test.war-eng.war.eng | 15.9 | 0.352 |
| Tatoeba-test.wln-eng.wln.eng | 12.6 | 0.291 |
| Tatoeba-test.wol-eng.wol.eng | 4.4 | 0.138 |
| Tatoeba-test.xal-eng.xal.eng | 0.9 | 0.153 |
| Tatoeba-test.xho-eng.xho.eng | 35.4 | 0.513 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.387 |
| Tatoeba-test.yor-eng.yor.eng | 19.3 | 0.327 |
| Tatoeba-test.zho-eng.zho.eng | 25.8 | 0.448 |
| Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.567 |
| Tatoeba-test.zza-eng.zza.eng | 1.6 | 0.125 |
### System Info:
- hf_name: mul-eng
- source_languages: mul
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en']
- src_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 
'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt
- src_alpha3: mul
- tgt_alpha3: eng
- short_pair: mul-en
- chrF2_score: 0.518
- bleu: 34.7
- brevity_penalty: 1.0
- ref_len: 72346.0
- src_name: Multiple languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: mul
- tgt_alpha2: en
- prefer_old: False
- long_pair: mul-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-mt-sv
|
Helsinki-NLP
| 2023-08-16T12:01:24Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mt",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-mt-sv
* source languages: mt
* target languages: sv
* OPUS readme: [mt-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mt.sv | 30.4 | 0.514 |
|
Helsinki-NLP/opus-mt-ms-de
|
Helsinki-NLP
| 2023-08-16T12:01:15Z | 128 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ms",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- ms
- de
tags:
- translation
license: apache-2.0
---
### msa-deu
* source group: Malay (macrolanguage)
* target group: German
* OPUS readme: [msa-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-deu/README.md)
* model: transformer-align
* source language(s): ind zsm_Latn
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa.deu | 36.5 | 0.584 |
### System Info:
- hf_name: msa-deu
- source_languages: msa
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ms', 'de']
- src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.test.txt
- src_alpha3: msa
- tgt_alpha3: deu
- short_pair: ms-de
- chrF2_score: 0.584
- bleu: 36.5
- brevity_penalty: 0.966
- ref_len: 4198.0
- src_name: Malay (macrolanguage)
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: ms
- tgt_alpha2: de
- prefer_old: False
- long_pair: msa-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-mr-en
|
Helsinki-NLP
| 2023-08-16T12:01:14Z | 868 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mr",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-mr-en
* source languages: mr
* target languages: en
* OPUS readme: [mr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mr-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.mr.en | 38.2 | 0.515 |
|
Helsinki-NLP/opus-mt-ml-en
|
Helsinki-NLP
| 2023-08-16T12:01:12Z | 493 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ml",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ml-en
* source languages: ml
* target languages: en
* OPUS readme: [ml-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ml-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ml-en/opus-2020-04-20.zip)
* test set translations: [opus-2020-04-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ml-en/opus-2020-04-20.test.txt)
* test set scores: [opus-2020-04-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ml-en/opus-2020-04-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ml.en | 42.7 | 0.605 |
|
Helsinki-NLP/opus-mt-mkh-en
|
Helsinki-NLP
| 2023-08-16T12:01:11Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"km",
"mkh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- vi
- km
- mkh
- en
tags:
- translation
license: apache-2.0
---
### mkh-eng
* source group: Mon-Khmer languages
* target group: English
* OPUS readme: [mkh-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkh-eng/README.md)
* model: transformer
* source language(s): kha khm khm_Latn mnw vie vie_Hani
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kha-eng.kha.eng | 0.5 | 0.108 |
| Tatoeba-test.khm-eng.khm.eng | 8.5 | 0.206 |
| Tatoeba-test.mnw-eng.mnw.eng | 0.7 | 0.110 |
| Tatoeba-test.multi.eng | 24.5 | 0.407 |
| Tatoeba-test.vie-eng.vie.eng | 34.4 | 0.529 |
### System Info:
- hf_name: mkh-eng
- source_languages: mkh
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkh-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'km', 'mkh', 'en']
- src_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.test.txt
- src_alpha3: mkh
- tgt_alpha3: eng
- short_pair: mkh-en
- chrF2_score: 0.407
- bleu: 24.5
- brevity_penalty: 1.0
- ref_len: 33985.0
- src_name: Mon-Khmer languages
- tgt_name: English
- train_date: 2020-07-27
- src_alpha2: mkh
- tgt_alpha2: en
- prefer_old: False
- long_pair: mkh-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
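The `brevity_penalty` and `ref_len` fields above follow the standard BLEU convention: the penalty is 1.0 whenever the system output is at least as long as the reference, and decays exponentially for shorter output. A minimal sketch (the helper name is illustrative, not part of the card schema):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis corpus is at least as
    long as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A system whose output matches the 33985-token reference length gets no
# penalty, consistent with the brevity_penalty of 1.0 reported above.
print(brevity_penalty(33985, 33985))  # -> 1.0
```

A brevity_penalty below 1.0 (as in some cards here, e.g. 0.966) therefore indicates the system translated slightly shorter than the reference on average.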
|
Helsinki-NLP/opus-mt-mk-fr
|
Helsinki-NLP
| 2023-08-16T12:01:09Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mk",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-mk-fr
* source languages: mk
* target languages: fr
* OPUS readme: [mk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.mk.fr | 22.3 | 0.492 |
|
Helsinki-NLP/opus-mt-mk-fi
|
Helsinki-NLP
| 2023-08-16T12:01:08Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mk",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-mk-fi
* source languages: mk
* target languages: fi
* OPUS readme: [mk-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mk.fi | 25.9 | 0.498 |
|
Helsinki-NLP/opus-mt-mk-es
|
Helsinki-NLP
| 2023-08-16T12:01:07Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mk",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- mk
- es
tags:
- translation
license: apache-2.0
---
### mkd-spa
* source group: Macedonian
* target group: Spanish
* OPUS readme: [mkd-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkd-spa/README.md)
* model: transformer-align
* source language(s): mkd
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.mkd.spa | 56.5 | 0.717 |
### System Info:
- hf_name: mkd-spa
- source_languages: mkd
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkd-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['mk', 'es']
- src_constituents: {'mkd'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.test.txt
- src_alpha3: mkd
- tgt_alpha3: spa
- short_pair: mk-es
- chrF2_score: 0.717
- bleu: 56.5
- brevity_penalty: 0.997
- ref_len: 1121.0
- src_name: Macedonian
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: mk
- tgt_alpha2: es
- prefer_old: False
- long_pair: mkd-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-mk-en
|
Helsinki-NLP
| 2023-08-16T12:01:06Z | 6,661 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mk",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-mk-en
* source languages: mk
* target languages: en
* OPUS readme: [mk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.mk.en | 59.8 | 0.720 |
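All of these cards follow the `Helsinki-NLP/opus-mt-{src}-{tgt}` repo naming scheme and load as Marian models via the `transformers` library. A hedged sketch, using this mk-en card as the example (the helper names are illustrative; `translate_mk_to_en` is defined but not run here, since it downloads model weights):

```python
def opus_mt_repo(src: str, tgt: str) -> str:
    """Build the Hugging Face repo id for an OPUS-MT language pair."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

def translate_mk_to_en(sentences):
    """Not invoked in this sketch: requires the `transformers` and
    `sentencepiece` packages and downloads the model weights."""
    from transformers import MarianMTModel, MarianTokenizer
    name = opus_mt_repo("mk", "en")
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    return tokenizer.batch_decode(model.generate(**batch),
                                  skip_special_tokens=True)

print(opus_mt_repo("mk", "en"))  # -> Helsinki-NLP/opus-mt-mk-en
```

Multi-target models such as sla-sla additionally require prefixing each source sentence with a `>>id<<` target-language token, as noted in those cards; single-target pairs like this one do not.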
|
Helsinki-NLP/opus-mt-mh-en
|
Helsinki-NLP
| 2023-08-16T12:01:02Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-mh-en
* source languages: mh
* target languages: en
* OPUS readme: [mh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mh.en | 36.5 | 0.505 |
|
Helsinki-NLP/opus-mt-mfs-es
|
Helsinki-NLP
| 2023-08-16T12:00:59Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mfs",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-mfs-es
* source languages: mfs
* target languages: es
* OPUS readme: [mfs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mfs-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mfs-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfs-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfs-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mfs.es | 88.9 | 0.910 |
|
Helsinki-NLP/opus-mt-lv-sv
|
Helsinki-NLP
| 2023-08-16T12:00:55Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lv-sv
* source languages: lv
* target languages: sv
* OPUS readme: [lv-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.sv | 22.0 | 0.444 |
|
Helsinki-NLP/opus-mt-lv-ru
|
Helsinki-NLP
| 2023-08-16T12:00:54Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- lv
- ru
tags:
- translation
license: apache-2.0
---
### lav-rus
* source group: Latvian
* target group: Russian
* OPUS readme: [lav-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md)
* model: transformer-align
* source language(s): lav
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lav.rus | 53.3 | 0.702 |
### System Info:
- hf_name: lav-rus
- source_languages: lav
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lv', 'ru']
- src_constituents: {'lav'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt
- src_alpha3: lav
- tgt_alpha3: rus
- short_pair: lv-ru
- chrF2_score: 0.702
- bleu: 53.3
- brevity_penalty: 0.984
- ref_len: 1541.0
- src_name: Latvian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: lv
- tgt_alpha2: ru
- prefer_old: False
- long_pair: lav-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-lv-fi
|
Helsinki-NLP
| 2023-08-16T12:00:51Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lv-fi
* source languages: lv
* target languages: fi
* OPUS readme: [lv-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.fi | 20.6 | 0.469 |
|
Helsinki-NLP/opus-mt-lv-es
|
Helsinki-NLP
| 2023-08-16T12:00:50Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lv-es
* source languages: lv
* target languages: es
* OPUS readme: [lv-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.es | 21.7 | 0.433 |
|
Helsinki-NLP/opus-mt-lv-en
|
Helsinki-NLP
| 2023-08-16T12:00:49Z | 9,463 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lv-en
* source languages: lv
* target languages: en
* OPUS readme: [lv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enlv.lv.en | 29.9 | 0.587 |
| newstest2017-enlv.lv.en | 22.1 | 0.526 |
| Tatoeba.lv.en | 53.3 | 0.707 |
|
Helsinki-NLP/opus-mt-lus-sv
|
Helsinki-NLP
| 2023-08-16T12:00:48Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lus",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lus-sv
* source languages: lus
* target languages: sv
* OPUS readme: [lus-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.sv | 25.5 | 0.439 |
|
Helsinki-NLP/opus-mt-lus-fr
|
Helsinki-NLP
| 2023-08-16T12:00:47Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lus",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lus-fr
* source languages: lus
* target languages: fr
* OPUS readme: [lus-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.fr | 25.5 | 0.423 |
|
Helsinki-NLP/opus-mt-lus-en
|
Helsinki-NLP
| 2023-08-16T12:00:43Z | 117 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lus",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lus-en
* source languages: lus
* target languages: en
* OPUS readme: [lus-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.en | 37.0 | 0.534 |
|
Helsinki-NLP/opus-mt-lun-en
|
Helsinki-NLP
| 2023-08-16T12:00:40Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lun",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lun-en
* source languages: lun
* target languages: en
* OPUS readme: [lun-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lun-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lun-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lun-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lun-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lun.en | 30.6 | 0.466 |
| Helsinki-NLP/opus-mt-lue-en | Helsinki-NLP | 2023-08-16T12:00:34Z | 108 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lue", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lue-en
* source languages: lue
* target languages: en
* OPUS readme: [lue-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lue.en | 31.7 | 0.469 |
| Helsinki-NLP/opus-mt-lua-sv | Helsinki-NLP | 2023-08-16T12:00:33Z | 113 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lua", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lua-sv
* source languages: lua
* target languages: sv
* OPUS readme: [lua-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.sv | 25.7 | 0.437 |
| Helsinki-NLP/opus-mt-lua-fr | Helsinki-NLP | 2023-08-16T12:00:32Z | 118 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lua", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lua-fr
* source languages: lua
* target languages: fr
* OPUS readme: [lua-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.fr | 25.7 | 0.429 |
| Helsinki-NLP/opus-mt-lua-es | Helsinki-NLP | 2023-08-16T12:00:30Z | 107 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lua", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lua-es
* source languages: lua
* target languages: es
* OPUS readme: [lua-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.es | 23.1 | 0.409 |
| Helsinki-NLP/opus-mt-lua-en | Helsinki-NLP | 2023-08-16T12:00:29Z | 111 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lua", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lua-en
* source languages: lua
* target languages: en
* OPUS readme: [lua-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.en | 34.4 | 0.502 |
| Helsinki-NLP/opus-mt-lu-fi | Helsinki-NLP | 2023-08-16T12:00:25Z | 105 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lu", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lu-fi
* source languages: lu
* target languages: fi
* OPUS readme: [lu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lu.fi | 21.4 | 0.442 |
| Helsinki-NLP/opus-mt-lu-es | Helsinki-NLP | 2023-08-16T12:00:24Z | 114 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lu", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lu-es
* source languages: lu
* target languages: es
* OPUS readme: [lu-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lu.es | 22.4 | 0.400 |
| Helsinki-NLP/opus-mt-lu-en | Helsinki-NLP | 2023-08-16T12:00:23Z | 113 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lu", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lu-en
* source languages: lu
* target languages: en
* OPUS readme: [lu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lu.en | 35.7 | 0.517 |
| Helsinki-NLP/opus-mt-lt-ru | Helsinki-NLP | 2023-08-16T12:00:19Z | 118 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
---
language:
- lt
- ru
tags:
- translation
license: apache-2.0
---
### lit-rus
* source group: Lithuanian
* target group: Russian
* OPUS readme: [lit-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-rus/README.md)
* model: transformer-align
* source language(s): lit
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lit.rus | 51.7 | 0.695 |
### System Info:
- hf_name: lit-rus
- source_languages: lit
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'ru']
- src_constituents: {'lit'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.test.txt
- src_alpha3: lit
- tgt_alpha3: rus
- short_pair: lt-ru
- chrF2_score: 0.695
- bleu: 51.7
- brevity_penalty: 0.982
- ref_len: 15395.0
- src_name: Lithuanian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: lt
- tgt_alpha2: ru
- prefer_old: False
- long_pair: lit-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
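The System Info block above reports `brevity_penalty: 0.982` alongside `ref_len: 15395.0`. BLEU's brevity penalty is 1 when the hypothesis is at least as long as the reference, and exp(1 − ref_len/hyp_len) otherwise. A minimal sketch (the ~15,120-token hypothesis length below is back-calculated from the two reported numbers, not a value taken from the card):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: penalizes hypotheses shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# Reproduces the card's reported value for a hypothesis of roughly
# 15120 tokens against ref_len 15395 (hyp_len is our back-calculation).
print(round(brevity_penalty(15120, 15395), 3))  # → 0.982
```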
| Helsinki-NLP/opus-mt-lt-eo | Helsinki-NLP | 2023-08-16T12:00:14Z | 120 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
---
language:
- lt
- eo
tags:
- translation
license: apache-2.0
---
### lit-epo
* source group: Lithuanian
* target group: Esperanto
* OPUS readme: [lit-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md)
* model: transformer-align
* source language(s): lit
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lit.epo | 13.0 | 0.313 |
### System Info:
- hf_name: lit-epo
- source_languages: lit
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'eo']
- src_constituents: {'lit'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt
- src_alpha3: lit
- tgt_alpha3: epo
- short_pair: lt-eo
- chrF2_score: 0.313
- bleu: 13.0
- brevity_penalty: 1.0
- ref_len: 70340.0
- src_name: Lithuanian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: lt
- tgt_alpha2: eo
- prefer_old: False
- long_pair: lit-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-loz-en | Helsinki-NLP | 2023-08-16T12:00:07Z | 112 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "loz", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-loz-en
* source languages: loz
* target languages: en
* OPUS readme: [loz-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/loz-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/loz-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.loz.en | 42.1 | 0.565 |
| Helsinki-NLP/opus-mt-ln-fr | Helsinki-NLP | 2023-08-16T12:00:05Z | 110 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ln", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ln-fr
* source languages: ln
* target languages: fr
* OPUS readme: [ln-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ln.fr | 28.4 | 0.456 |
| Helsinki-NLP/opus-mt-ln-es | Helsinki-NLP | 2023-08-16T12:00:03Z | 113 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ln", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ln-es
* source languages: ln
* target languages: es
* OPUS readme: [ln-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ln.es | 26.5 | 0.444 |
| Helsinki-NLP/opus-mt-ln-en | Helsinki-NLP | 2023-08-16T12:00:02Z | 126 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ln", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ln-en
* source languages: ln
* target languages: en
* OPUS readme: [ln-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ln.en | 35.9 | 0.516 |
| Helsinki-NLP/opus-mt-lg-sv | Helsinki-NLP | 2023-08-16T12:00:00Z | 104 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lg-sv
* source languages: lg
* target languages: sv
* OPUS readme: [lg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.sv | 24.5 | 0.423 |
| Helsinki-NLP/opus-mt-lg-fi | Helsinki-NLP | 2023-08-16T11:59:58Z | 129 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lg-fi
* source languages: lg
* target languages: fi
* OPUS readme: [lg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.fi | 21.8 | 0.424 |
| Helsinki-NLP/opus-mt-lg-es | Helsinki-NLP | 2023-08-16T11:59:57Z | 112 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lg-es
* source languages: lg
* target languages: es
* OPUS readme: [lg-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.es | 22.1 | 0.393 |
| Helsinki-NLP/opus-mt-lg-en | Helsinki-NLP | 2023-08-16T11:59:55Z | 1,921 | 2 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-lg-en
* source languages: lg
* target languages: en
* OPUS readme: [lg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.en | 32.6 | 0.480 |
| Tatoeba.lg.en | 5.4 | 0.243 |
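The chr-F column in these benchmark tables is chrF, a character n-gram F-score. As an illustration only, here is a simplified sentence-level sketch with the metric's standard recall weighting (beta = 2); the reference sacrebleu implementation differs in whitespace handling, corpus-level aggregation, and smoothing, so the numbers will not match the tables exactly:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Character n-gram counts, ignoring spaces."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hyp: str, ref: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified sentence-level chrF in [0, 1]."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        if not h or not r:
            continue  # sentence too short for this order
        overlap = sum((h & r).values())
        precisions.append(overlap / sum(h.values()))
        recalls.append(overlap / sum(r.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

With beta = 2, recall counts four times as much as precision, which is why chrF rewards translations that cover the reference even when they add extra material.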
| Helsinki-NLP/opus-mt-kwy-en | Helsinki-NLP | 2023-08-16T11:59:52Z | 115 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kwy", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
---
tags:
- translation
license: apache-2.0
---
### opus-mt-kwy-en
* source languages: kwy
* target languages: en
* OPUS readme: [kwy-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kwy-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kwy-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kwy.en | 31.6 | 0.466 |
|