modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard |
---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-fr-fj | 2021-01-18T08:43:14.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"fj",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 52 | transformers | ---
tags:
- translation
---
### opus-mt-fr-fj
* source languages: fr
* target languages: fj
* OPUS readme: [fr-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.fj | 27.4 | 0.487 |
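The download, test-set, and score links above all follow one fixed OPUS-MT URL pattern. A minimal sketch that reconstructs them from a language pair and release date (the pattern is read off the links above; treat it as an assumption for pairs or releases not listed here):

```python
# Sketch: build OPUS-MT artifact URLs from a language pair and release date.
# The URL pattern is taken from the links in this card; it is an assumption
# that it holds for every pair/release.

BASE = "https://object.pouta.csc.fi/OPUS-MT-models"

def opus_mt_urls(pair: str, release: str) -> dict:
    """Return weights/test/eval URLs, e.g. pair='fr-fj', release='opus-2020-01-09'."""
    stem = f"{BASE}/{pair}/{release}"
    return {
        "weights": f"{stem}.zip",       # original Marian weights
        "test": f"{stem}.test.txt",     # test set translations
        "eval": f"{stem}.eval.txt",     # BLEU / chr-F scores
    }
```

The same helper applies to every `opus-mt-fr-*` card below, since they share the release-archive layout.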
|
Helsinki-NLP/opus-mt-fr-gaa | 2021-01-18T08:43:19.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"gaa",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 43 | transformers | ---
tags:
- translation
---
### opus-mt-fr-gaa
* source languages: fr
* target languages: gaa
* OPUS readme: [fr-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.gaa | 27.8 | 0.473 |
|
Helsinki-NLP/opus-mt-fr-gil | 2021-01-18T08:43:25.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"gil",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 51 | transformers | ---
tags:
- translation
---
### opus-mt-fr-gil
* source languages: fr
* target languages: gil
* OPUS readme: [fr-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.gil | 27.9 | 0.499 |
|
Helsinki-NLP/opus-mt-fr-guw | 2021-01-18T08:43:30.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"guw",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 126 | transformers | ---
tags:
- translation
---
### opus-mt-fr-guw
* source languages: fr
* target languages: guw
* OPUS readme: [fr-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.guw | 31.4 | 0.505 |
|
Helsinki-NLP/opus-mt-fr-ha | 2021-01-18T08:43:36.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ha",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 40 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ha
* source languages: fr
* target languages: ha
* OPUS readme: [fr-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ha | 24.4 | 0.447 |
|
Helsinki-NLP/opus-mt-fr-he | 2021-01-18T08:43:43.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"he",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"special_tokens_map.json",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 57 | transformers | ---
language:
- fr
- he
tags:
- translation
license: apache-2.0
---
### fr-he
* source group: French
* target group: Hebrew
* OPUS readme: [fra-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-heb/README.md)
* source language(s): fra
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.heb | 39.2 | 0.598 |
### System Info:
- hf_name: fr-he
- source_languages: fra
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'he']
- src_constituents: ('French', {'fra'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fra-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.test.txt
- src_alpha3: fra
- tgt_alpha3: heb
- chrF2_score: 0.598
- bleu: 39.2
- brevity_penalty: 1.0
- ref_len: 20655.0
- src_name: French
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: fr
- tgt_alpha2: he
- prefer_old: False
- short_pair: fr-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:02 |
Helsinki-NLP/opus-mt-fr-hil | 2021-01-18T08:43:48.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"hil",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 53 | transformers | ---
tags:
- translation
---
### opus-mt-fr-hil
* source languages: fr
* target languages: hil
* OPUS readme: [fr-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.hil | 34.7 | 0.559 |
|
Helsinki-NLP/opus-mt-fr-ho | 2021-01-18T08:43:53.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ho",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 40 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ho
* source languages: fr
* target languages: ho
* OPUS readme: [fr-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ho | 25.4 | 0.480 |
|
Helsinki-NLP/opus-mt-fr-hr | 2021-01-18T08:43:59.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"hr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 88 | transformers | ---
tags:
- translation
---
### opus-mt-fr-hr
* source languages: fr
* target languages: hr
* OPUS readme: [fr-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.hr | 20.7 | 0.442 |
|
Helsinki-NLP/opus-mt-fr-ht | 2021-01-18T08:44:03.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ht",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 41 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ht
* source languages: fr
* target languages: ht
* OPUS readme: [fr-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ht | 29.2 | 0.461 |
|
Helsinki-NLP/opus-mt-fr-hu | 2021-01-18T08:44:08.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"hu",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 48 | transformers | ---
tags:
- translation
---
### opus-mt-fr-hu
* source languages: fr
* target languages: hu
* OPUS readme: [fr-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.hu | 41.3 | 0.629 |
|
Helsinki-NLP/opus-mt-fr-id | 2021-01-18T08:44:14.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"id",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 47 | transformers | ---
tags:
- translation
---
### opus-mt-fr-id
* source languages: fr
* target languages: id
* OPUS readme: [fr-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-id/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-id/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-id/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-id/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.id | 37.2 | 0.636 |
|
Helsinki-NLP/opus-mt-fr-ig | 2021-01-18T08:44:19.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ig",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 52 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ig
* source languages: fr
* target languages: ig
* OPUS readme: [fr-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ig | 29.0 | 0.445 |
|
Helsinki-NLP/opus-mt-fr-ilo | 2021-01-18T08:44:25.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ilo",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 38 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ilo
* source languages: fr
* target languages: ilo
* OPUS readme: [fr-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ilo | 30.6 | 0.528 |
|
Helsinki-NLP/opus-mt-fr-iso | 2021-01-18T08:44:31.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"iso",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 52 | transformers | ---
tags:
- translation
---
### opus-mt-fr-iso
* source languages: fr
* target languages: iso
* OPUS readme: [fr-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.iso | 26.7 | 0.429 |
|
Helsinki-NLP/opus-mt-fr-kg | 2021-01-18T08:44:36.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"kg",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-fr-kg
* source languages: fr
* target languages: kg
* OPUS readme: [fr-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kg | 30.4 | 0.523 |
|
Helsinki-NLP/opus-mt-fr-kqn | 2021-01-18T08:44:41.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"kqn",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 60 | transformers | ---
tags:
- translation
---
### opus-mt-fr-kqn
* source languages: fr
* target languages: kqn
* OPUS readme: [fr-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kqn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kqn | 23.3 | 0.469 |
|
Helsinki-NLP/opus-mt-fr-kwy | 2021-01-18T08:44:47.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"kwy",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 38 | transformers | ---
tags:
- translation
---
### opus-mt-fr-kwy
* source languages: fr
* target languages: kwy
* OPUS readme: [fr-kwy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kwy/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kwy | 22.5 | 0.428 |
|
Helsinki-NLP/opus-mt-fr-lg | 2021-01-18T08:44:52.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"lg",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 30 | transformers | ---
tags:
- translation
---
### opus-mt-fr-lg
* source languages: fr
* target languages: lg
* OPUS readme: [fr-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lg | 21.7 | 0.454 |
|
Helsinki-NLP/opus-mt-fr-ln | 2021-01-18T08:44:57.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ln",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 63 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ln
* source languages: fr
* target languages: ln
* OPUS readme: [fr-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ln/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ln/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ln/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ln | 30.5 | 0.527 |
|
Helsinki-NLP/opus-mt-fr-loz | 2021-01-18T08:45:03.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"loz",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 45 | transformers | ---
tags:
- translation
---
### opus-mt-fr-loz
* source languages: fr
* target languages: loz
* OPUS readme: [fr-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-loz/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-loz/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-loz/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.loz | 30.0 | 0.498 |
|
Helsinki-NLP/opus-mt-fr-lu | 2021-01-18T08:45:08.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"lu",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 43 | transformers | ---
tags:
- translation
---
### opus-mt-fr-lu
* source languages: fr
* target languages: lu
* OPUS readme: [fr-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lu | 25.5 | 0.471 |
|
Helsinki-NLP/opus-mt-fr-lua | 2021-01-18T08:45:13.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"lua",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 42 | transformers | ---
tags:
- translation
---
### opus-mt-fr-lua
* source languages: fr
* target languages: lua
* OPUS readme: [fr-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lua/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lua/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lua/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lua | 27.3 | 0.496 |
|
Helsinki-NLP/opus-mt-fr-lue | 2021-01-18T08:45:19.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"lue",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 48 | transformers | ---
tags:
- translation
---
### opus-mt-fr-lue
* source languages: fr
* target languages: lue
* OPUS readme: [fr-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lue/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lue | 23.1 | 0.485 |
|
Helsinki-NLP/opus-mt-fr-lus | 2021-01-18T08:45:24.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"lus",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 61 | transformers | ---
tags:
- translation
---
### opus-mt-fr-lus
* source languages: fr
* target languages: lus
* OPUS readme: [fr-lus](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lus/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lus/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lus/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lus/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lus | 25.5 | 0.455 |
|
Helsinki-NLP/opus-mt-fr-mfe | 2021-01-18T08:45:29.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"mfe",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 45 | transformers | ---
tags:
- translation
---
### opus-mt-fr-mfe
* source languages: fr
* target languages: mfe
* OPUS readme: [fr-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mfe/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mfe | 26.1 | 0.451 |
|
Helsinki-NLP/opus-mt-fr-mh | 2021-01-18T08:45:34.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"mh",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-fr-mh
* source languages: fr
* target languages: mh
* OPUS readme: [fr-mh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mh/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mh/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mh/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mh | 21.7 | 0.399 |
|
Helsinki-NLP/opus-mt-fr-mos | 2021-01-18T08:45:39.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"mos",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 63 | transformers | ---
tags:
- translation
---
### opus-mt-fr-mos
* source languages: fr
* target languages: mos
* OPUS readme: [fr-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mos/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mos/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mos/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mos/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mos | 21.1 | 0.353 |
|
Helsinki-NLP/opus-mt-fr-ms | 2021-01-18T08:45:45.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ms",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 36 | transformers | ---
language:
- fr
- ms
tags:
- translation
license: apache-2.0
---
### fra-msa
* source group: French
* target group: Malay (macrolanguage)
* OPUS readme: [fra-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-msa/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): ind zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.eval.txt)
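Because this model serves multiple target languages, each source sentence must start with the `>>id<<` token described above. A minimal sketch of preparing a batch this way (the `add_target_token` helper is illustrative, not part of any library):

```python
def add_target_token(sentences, lang_id):
    """Prepend the required >>id<< target-language token to each source sentence."""
    return [f">>{lang_id}<< {s}" for s in sentences]

# e.g. steer the fra-msa model toward Indonesian (ind) or standard Malay (zsm_Latn)
batch = add_target_token(["Bonjour le monde."], "ind")
print(batch)  # → ['>>ind<< Bonjour le monde.']
```

The tagged strings can then be passed to the tokenizer of the downloaded model as usual.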
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.msa | 35.3 | 0.617 |
### System Info:
- hf_name: fra-msa
- source_languages: fra
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'ms']
- src_constituents: {'fra'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: msa
- short_pair: fr-ms
- chrF2_score: 0.617
- bleu: 35.3
- brevity_penalty: 0.978
- ref_len: 6696.0
- src_name: French
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: ms
- prefer_old: False
- long_pair: fra-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fr-mt | 2021-01-18T08:45:50.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"mt",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 49 | transformers | ---
tags:
- translation
---
### opus-mt-fr-mt
* source languages: fr
* target languages: mt
* OPUS readme: [fr-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mt | 28.7 | 0.466 |
|
Helsinki-NLP/opus-mt-fr-niu | 2021-01-18T08:45:56.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"niu",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 39 | transformers | ---
tags:
- translation
---
### opus-mt-fr-niu
* source languages: fr
* target languages: niu
* OPUS readme: [fr-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-niu/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-niu/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-niu/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.niu | 34.5 | 0.537 |
|
Helsinki-NLP/opus-mt-fr-no | 2021-01-18T08:45:59.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"no",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 189 | transformers | ---
language:
- fr
- no
tags:
- translation
license: apache-2.0
---
### fra-nor
* source group: French
* target group: Norwegian
* OPUS readme: [fra-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-nor/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.nor | 36.1 | 0.555 |
### System Info:
- hf_name: fra-nor
- source_languages: fra
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'no']
- src_constituents: {'fra'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: nor
- short_pair: fr-no
- chrF2_score: 0.555
- bleu: 36.1
- brevity_penalty: 0.981
- ref_len: 3089.0
- src_name: French
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: no
- prefer_old: False
- long_pair: fra-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fr-nso | 2021-01-18T08:46:05.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"nso",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 68 | transformers | ---
tags:
- translation
---
### opus-mt-fr-nso
* source languages: fr
* target languages: nso
* OPUS readme: [fr-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-nso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-nso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-nso/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.nso | 33.3 | 0.527 |
|
Helsinki-NLP/opus-mt-fr-ny | 2021-01-18T08:46:09.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ny",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 50 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ny
* source languages: fr
* target languages: ny
* OPUS readme: [fr-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ny/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ny/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ny/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ny | 23.2 | 0.481 |
|
Helsinki-NLP/opus-mt-fr-pag | 2021-01-18T08:46:15.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"pag",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 42 | transformers | ---
tags:
- translation
---
### opus-mt-fr-pag
* source languages: fr
* target languages: pag
* OPUS readme: [fr-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pag | 27.0 | 0.486 |
|
Helsinki-NLP/opus-mt-fr-pap | 2021-01-18T08:46:20.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"pap",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 59 | transformers | ---
tags:
- translation
---
### opus-mt-fr-pap
* source languages: fr
* target languages: pap
* OPUS readme: [fr-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pap/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pap/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pap/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pap | 27.8 | 0.464 |
|
Helsinki-NLP/opus-mt-fr-pis | 2021-01-18T08:46:24.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"pis",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 44 | transformers | ---
tags:
- translation
---
### opus-mt-fr-pis
* source languages: fr
* target languages: pis
* OPUS readme: [fr-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pis | 29.0 | 0.486 |
|
Helsinki-NLP/opus-mt-fr-pl | 2021-01-18T08:46:29.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"pl",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 140 | transformers | ---
tags:
- translation
---
### opus-mt-fr-pl
* source languages: fr
* target languages: pl
* OPUS readme: [fr-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.pl | 40.7 | 0.625 |
|
Helsinki-NLP/opus-mt-fr-pon | 2021-01-18T08:46:34.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"pon",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 59 | transformers | ---
tags:
- translation
---
### opus-mt-fr-pon
* source languages: fr
* target languages: pon
* OPUS readme: [fr-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pon | 23.9 | 0.458 |
|
Helsinki-NLP/opus-mt-fr-rnd | 2021-01-18T08:46:39.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"rnd",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 58 | transformers | ---
tags:
- translation
---
### opus-mt-fr-rnd
* source languages: fr
* target languages: rnd
* OPUS readme: [fr-rnd](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-rnd/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-rnd/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rnd/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rnd/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.rnd | 21.8 | 0.431 |
|
Helsinki-NLP/opus-mt-fr-ro | 2021-01-18T08:46:44.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ro",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 129 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ro
* source languages: fr
* target languages: ro
* OPUS readme: [fr-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ro/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ro/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ro/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ro | 42.1 | 0.640 |
|
Helsinki-NLP/opus-mt-fr-ru | 2021-01-18T08:46:51.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ru",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 148 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ru
* source languages: fr
* target languages: ru
* OPUS readme: [fr-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ru | 37.9 | 0.585 |
|
Helsinki-NLP/opus-mt-fr-run | 2021-01-18T08:46:56.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"run",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 63 | transformers | ---
tags:
- translation
---
### opus-mt-fr-run
* source languages: fr
* target languages: run
* OPUS readme: [fr-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-run/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-run/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-run/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-run/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.run | 23.8 | 0.482 |
|
Helsinki-NLP/opus-mt-fr-rw | 2021-01-18T08:47:01.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"rw",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 50 | transformers | ---
tags:
- translation
---
### opus-mt-fr-rw
* source languages: fr
* target languages: rw
* OPUS readme: [fr-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-rw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-rw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.rw | 25.5 | 0.483 |
|
Helsinki-NLP/opus-mt-fr-sg | 2021-01-18T08:47:07.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"sg",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 76 | transformers | ---
tags:
- translation
---
### opus-mt-fr-sg
* source languages: fr
* target languages: sg
* OPUS readme: [fr-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sg | 29.7 | 0.473 |
|
Helsinki-NLP/opus-mt-fr-sk | 2021-01-18T08:47:12.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"sk",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 118 | transformers | ---
tags:
- translation
---
### opus-mt-fr-sk
* source languages: fr
* target languages: sk
* OPUS readme: [fr-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sk/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sk | 24.9 | 0.456 |
|
Helsinki-NLP/opus-mt-fr-sl | 2021-01-18T08:47:18.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"sl",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 63 | transformers | ---
tags:
- translation
---
### opus-mt-fr-sl
* source languages: fr
* target languages: sl
* OPUS readme: [fr-sl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sl | 20.1 | 0.433 |
|
Helsinki-NLP/opus-mt-fr-sm | 2021-01-18T08:47:24.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"sm",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 46 | transformers | ---
tags:
- translation
---
### opus-mt-fr-sm
* source languages: fr
* target languages: sm
* OPUS readme: [fr-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sm | 28.8 | 0.474 |
|
Helsinki-NLP/opus-mt-fr-sn | 2021-01-18T08:47:29.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"sn",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 49 | transformers | ---
tags:
- translation
---
### opus-mt-fr-sn
* source languages: fr
* target languages: sn
* OPUS readme: [fr-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sn | 23.4 | 0.507 |
|
Helsinki-NLP/opus-mt-fr-srn | 2021-01-18T08:47:34.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"srn",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 44 | transformers | ---
tags:
- translation
---
### opus-mt-fr-srn
* source languages: fr
* target languages: srn
* OPUS readme: [fr-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-srn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.srn | 27.4 | 0.459 |
|
Helsinki-NLP/opus-mt-fr-st | 2021-01-18T08:47:39.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"st",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 53 | transformers | ---
tags:
- translation
---
### opus-mt-fr-st
* source languages: fr
* target languages: st
* OPUS readme: [fr-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.st | 34.6 | 0.540 |
|
Helsinki-NLP/opus-mt-fr-sv | 2021-01-18T08:47:45.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 52 | transformers | ---
tags:
- translation
---
### opus-mt-fr-sv
* source languages: fr
* target languages: sv
* OPUS readme: [fr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.sv | 60.1 | 0.744 |
|
Helsinki-NLP/opus-mt-fr-swc | 2021-01-18T08:47:50.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"swc",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 59 | transformers | ---
tags:
- translation
---
### opus-mt-fr-swc
* source languages: fr
* target languages: swc
* OPUS readme: [fr-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.swc | 28.2 | 0.499 |
|
Helsinki-NLP/opus-mt-fr-tiv | 2021-01-18T08:47:56.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"tiv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 71 | transformers | ---
tags:
- translation
---
### opus-mt-fr-tiv
* source languages: fr
* target languages: tiv
* OPUS readme: [fr-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tiv | 23.5 | 0.406 |
|
Helsinki-NLP/opus-mt-fr-tl | 2021-01-18T08:48:07.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"tl",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 38 | transformers | ---
language:
- fr
- tl
tags:
- translation
license: apache-2.0
---
### fra-tgl
* source group: French
* target group: Tagalog
* OPUS readme: [fra-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-tgl/README.md)
* source language(s): fra
* target language(s): tgl_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.tgl | 24.1 | 0.536 |
### System Info:
- hf_name: fra-tgl
- source_languages: fra
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'tl']
- src_constituents: {'fra'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: tgl
- short_pair: fr-tl
- chrF2_score: 0.536
- bleu: 24.1
- brevity_penalty: 1.0
- ref_len: 5778.0
- src_name: French
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: tl
- prefer_old: False
- long_pair: fra-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
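The `bleu`, `brevity_penalty`, and `ref_len` fields in the System Info block above are tied together by BLEU's length penalty: a system that produces output shorter than the reference is penalized exponentially. A minimal sketch of that formula (the function name is ours, not part of the card; a `brevity_penalty` of 1.0, as reported here, means the hypotheses were at least as long as the 5778-token reference):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis corpus is at least
    as long as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

print(brevity_penalty(6000, 5778))          # 1.0 (no penalty)
print(round(brevity_penalty(5000, 5778), 3))  # < 1.0 (output too short)
```

The reported BLEU is the n-gram precision score multiplied by this penalty, which is why a `brevity_penalty` of exactly 1.0 indicates length was not a factor in the score.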
Helsinki-NLP/opus-mt-fr-tll | 2021-01-18T08:48:14.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"tll",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-fr-tll
* source languages: fr
* target languages: tll
* OPUS readme: [fr-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tll | 24.6 | 0.467 |
|
Helsinki-NLP/opus-mt-fr-tn | 2021-01-18T08:48:21.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"tn",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 47 | transformers | ---
tags:
- translation
---
### opus-mt-fr-tn
* source languages: fr
* target languages: tn
* OPUS readme: [fr-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tn | 33.1 | 0.525 |
|
Helsinki-NLP/opus-mt-fr-to | 2021-01-18T08:48:27.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"to",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 75 | transformers | ---
tags:
- translation
---
### opus-mt-fr-to
* source languages: fr
* target languages: to
* OPUS readme: [fr-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.to | 37.0 | 0.518 |
|
Helsinki-NLP/opus-mt-fr-tpi | 2021-01-18T08:48:33.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"tpi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 57 | transformers | ---
tags:
- translation
---
### opus-mt-fr-tpi
* source languages: fr
* target languages: tpi
* OPUS readme: [fr-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tpi | 30.0 | 0.487 |
|
Helsinki-NLP/opus-mt-fr-ts | 2021-01-18T08:48:40.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ts",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 83 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ts
* source languages: fr
* target languages: ts
* OPUS readme: [fr-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ts/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ts/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ts/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ts/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ts | 31.4 | 0.525 |
|
Helsinki-NLP/opus-mt-fr-tum | 2021-01-18T08:48:47.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"tum",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 67 | transformers | ---
tags:
- translation
---
### opus-mt-fr-tum
* source languages: fr
* target languages: tum
* OPUS readme: [fr-tum](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tum/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tum/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tum/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tum/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tum | 23.0 | 0.458 |
|
Helsinki-NLP/opus-mt-fr-tvl | 2021-01-18T08:48:54.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"tvl",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 49 | transformers | ---
tags:
- translation
---
### opus-mt-fr-tvl
* source languages: fr
* target languages: tvl
* OPUS readme: [fr-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tvl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tvl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tvl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tvl | 32.6 | 0.497 |
|
Helsinki-NLP/opus-mt-fr-tw | 2021-01-18T08:49:00.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"tw",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-fr-tw
* source languages: fr
* target languages: tw
* OPUS readme: [fr-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tw | 27.9 | 0.469 |
|
Helsinki-NLP/opus-mt-fr-ty | 2021-01-18T08:49:06.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ty",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 57 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ty
* source languages: fr
* target languages: ty
* OPUS readme: [fr-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ty | 39.6 | 0.561 |
|
Helsinki-NLP/opus-mt-fr-uk | 2021-01-18T08:49:13.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"uk",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 49 | transformers | ---
tags:
- translation
---
### opus-mt-fr-uk
* source languages: fr
* target languages: uk
* OPUS readme: [fr-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.uk | 39.4 | 0.581 |
|
Helsinki-NLP/opus-mt-fr-ve | 2021-01-18T08:49:19.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"ve",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 50 | transformers | ---
tags:
- translation
---
### opus-mt-fr-ve
* source languages: fr
* target languages: ve
* OPUS readme: [fr-ve](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ve/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ve/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ve/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ve/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ve | 26.3 | 0.481 |
|
Helsinki-NLP/opus-mt-fr-vi | 2021-01-18T08:49:25.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"vi",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 29 | transformers | ---
language:
- fr
- vi
tags:
- translation
license: apache-2.0
---
### fra-vie
* source group: French
* target group: Vietnamese
* OPUS readme: [fra-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md)
* source language(s): fra
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.vie | 31.1 | 0.486 |
### System Info:
- hf_name: fra-vie
- source_languages: fra
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'vi']
- src_constituents: {'fra'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: vie
- short_pair: fr-vi
- chrF2_score: 0.486
- bleu: 31.1
- brevity_penalty: 0.985
- ref_len: 13219.0
- src_name: French
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: vi
- prefer_old: False
- long_pair: fra-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
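Every checkpoint in this listing follows the same `Helsinki-NLP/opus-mt-<src>-<tgt>` naming scheme and loads through the Marian classes in `transformers`. A hedged usage sketch (the helper names are ours; `translate` needs network access to download weights, so the heavy import is lazy and the function is only defined here, not run):

```python
def hf_model_id(src: str, tgt: str) -> str:
    """Build the Hugging Face repo id used throughout this listing,
    e.g. ('fr', 'vi') -> 'Helsinki-NLP/opus-mt-fr-vi'."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

def translate(texts, src: str = "fr", tgt: str = "vi"):
    """Translate a list of strings with one of the Marian checkpoints.
    Requires `transformers` (and a download on first use), hence the
    lazy import."""
    from transformers import MarianMTModel, MarianTokenizer
    name = hf_model_id(src, tgt)
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

print(hf_model_id("fr", "vi"))  # Helsinki-NLP/opus-mt-fr-vi
```

The `short_pair` field in each System Info block (here `fr-vi`) is exactly the `<src>-<tgt>` suffix of the repo id, while `long_pair` (`fra-vie`) names the underlying OPUS training pair.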
Helsinki-NLP/opus-mt-fr-war | 2021-01-18T08:49:31.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"war",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 53 | transformers | ---
tags:
- translation
---
### opus-mt-fr-war
* source languages: fr
* target languages: war
* OPUS readme: [fr-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-war/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.war | 33.7 | 0.538 |
|
Helsinki-NLP/opus-mt-fr-wls | 2021-01-18T08:49:36.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"wls",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-fr-wls
* source languages: fr
* target languages: wls
* OPUS readme: [fr-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.wls | 27.5 | 0.478 |
|
Helsinki-NLP/opus-mt-fr-xh | 2021-01-18T08:49:42.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"xh",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 52 | transformers | ---
tags:
- translation
---
### opus-mt-fr-xh
* source languages: fr
* target languages: xh
* OPUS readme: [fr-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-xh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-xh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-xh/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.xh | 25.1 | 0.523 |
|
Helsinki-NLP/opus-mt-fr-yap | 2021-01-18T08:49:48.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"yap",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 56 | transformers | ---
tags:
- translation
---
### opus-mt-fr-yap
* source languages: fr
* target languages: yap
* OPUS readme: [fr-yap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-yap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.yap | 25.8 | 0.434 |
|
Helsinki-NLP/opus-mt-fr-yo | 2021-01-18T08:49:54.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"yo",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 70 | transformers | ---
tags:
- translation
---
### opus-mt-fr-yo
* source languages: fr
* target languages: yo
* OPUS readme: [fr-yo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-yo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.yo | 25.9 | 0.415 |
|
Helsinki-NLP/opus-mt-fr-zne | 2021-01-18T08:50:00.000Z | [
"pytorch",
"marian",
"seq2seq",
"fr",
"zne",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 52 | transformers | ---
tags:
- translation
---
### opus-mt-fr-zne
* source languages: fr
* target languages: zne
* OPUS readme: [fr-zne](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-zne/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.zne | 24.1 | 0.460 |
|
Helsinki-NLP/opus-mt-fse-fi | 2021-01-18T08:50:05.000Z | [
"pytorch",
"marian",
"seq2seq",
"fse",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 48 | transformers | ---
tags:
- translation
---
### opus-mt-fse-fi
* source languages: fse
* target languages: fi
* OPUS readme: [fse-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fse-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fse.fi | 90.2 | 0.943 |
|
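The chr-F column reported in every benchmark table (0.943 for this fse-fi card) is a character n-gram F-score, which is more forgiving of morphological variation than word-level BLEU. A toy sketch of the idea, written by us and much simpler than the official sacrebleu implementation (no whitespace handling, single segment only):

```python
from collections import Counter

def char_ngrams(s: str, n: int) -> Counter:
    """Multiset of character n-grams of s."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hyp: str, ref: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Toy chrF: average character n-gram precision and recall combined
    as an F_beta score (beta=2 weights recall, as in the chr-F column)."""
    precs, recs = [], []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        overlap = sum((h & r).values())  # clipped n-gram matches
        if h:
            precs.append(overlap / sum(h.values()))
        if r:
            recs.append(overlap / sum(r.values()))
    p = sum(precs) / len(precs) if precs else 0.0
    r = sum(recs) / len(recs) if recs else 0.0
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(chrf("hello world", "hello world"))  # 1.0
```

An identical hypothesis and reference score 1.0; a near-perfect chr-F like the 0.943 above indicates the test set (back-translated JW300 here) is unusually easy or close to the training data.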
Helsinki-NLP/opus-mt-ga-en | 2021-01-18T08:50:12.000Z | [
"pytorch",
"marian",
"seq2seq",
"ga",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 388 | transformers | ---
language:
- ga
- en
tags:
- translation
license: apache-2.0
---
### gle-eng
* source group: Irish
* target group: English
* OPUS readme: [gle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md)
* source language(s): gle
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.gle.eng | 51.6 | 0.672 |
### System Info:
- hf_name: gle-eng
- source_languages: gle
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ga', 'en']
- src_constituents: {'gle'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt
- src_alpha3: gle
- tgt_alpha3: eng
- short_pair: ga-en
- chrF2_score: 0.672
- bleu: 51.6
- brevity_penalty: 1.0
- ref_len: 11247.0
- src_name: Irish
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ga
- tgt_alpha2: en
- prefer_old: False
- long_pair: gle-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
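Each card reports its scores in the same three-column markdown Benchmarks table, which makes the whole listing easy to mine programmatically. A small sketch of pulling `(testset, BLEU, chr-F)` out of one such row (the parser is ours and assumes this fixed layout):

```python
def parse_benchmark_row(row: str) -> tuple[str, float, float]:
    """Split a '| testset | BLEU | chr-F |' markdown row into typed fields."""
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    testset, bleu, chrf = cells
    return testset, float(bleu), float(chrf)

print(parse_benchmark_row("| Tatoeba-test.gle.eng  | 51.6 | 0.672 |"))
# ('Tatoeba-test.gle.eng', 51.6, 0.672)
```

Test-set names encode provenance: `JW300.*` rows come from the JW300 corpus used to train the OPUS-MT models, while `Tatoeba.*` / `Tatoeba-test.*` rows come from the held-out Tatoeba Challenge sets.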
Helsinki-NLP/opus-mt-gaa-de | 2021-01-18T08:50:17.000Z | [
"pytorch",
"marian",
"seq2seq",
"gaa",
"de",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 55 | transformers | ---
tags:
- translation
---
### opus-mt-gaa-de
* source languages: gaa
* target languages: de
* OPUS readme: [gaa-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.de | 23.3 | 0.438 |
|
Helsinki-NLP/opus-mt-gaa-en | 2021-01-18T08:51:18.000Z | [
"pytorch",
"marian",
"seq2seq",
"gaa",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 71 | transformers | ---
tags:
- translation
---
### opus-mt-gaa-en
* source languages: gaa
* target languages: en
* OPUS readme: [gaa-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.en | 41.0 | 0.567 |
|
Helsinki-NLP/opus-mt-gaa-es | 2021-01-18T08:51:32.000Z | [
"pytorch",
"marian",
"seq2seq",
"gaa",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 47 | transformers | ---
tags:
- translation
---
### opus-mt-gaa-es
* source languages: gaa
* target languages: es
* OPUS readme: [gaa-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.es | 28.6 | 0.463 |
|
Helsinki-NLP/opus-mt-gaa-fi | 2021-01-18T08:51:38.000Z | [
"pytorch",
"marian",
"seq2seq",
"gaa",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 42 | transformers | ---
tags:
- translation
---
### opus-mt-gaa-fi
* source languages: gaa
* target languages: fi
* OPUS readme: [gaa-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.fi | 26.4 | 0.498 |
|
Helsinki-NLP/opus-mt-gaa-fr | 2021-01-18T08:51:44.000Z | [
"pytorch",
"marian",
"seq2seq",
"gaa",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 53 | transformers | ---
tags:
- translation
---
### opus-mt-gaa-fr
* source languages: gaa
* target languages: fr
* OPUS readme: [gaa-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.fr | 27.8 | 0.455 |
|
Helsinki-NLP/opus-mt-gaa-sv | 2021-01-18T08:51:50.000Z | [
"pytorch",
"marian",
"seq2seq",
"gaa",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 45 | transformers | ---
tags:
- translation
---
### opus-mt-gaa-sv
* source languages: gaa
* target languages: sv
* OPUS readme: [gaa-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.sv | 30.1 | 0.489 |
|
Helsinki-NLP/opus-mt-gem-en | 2021-01-18T08:51:56.000Z | [
"pytorch",
"marian",
"seq2seq",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"en",
"lb",
"yi",
"gem",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 30 | transformers | ---
language:
- da
- sv
- af
- nn
- fy
- fo
- de
- nb
- nl
- is
- en
- lb
- yi
- gem
tags:
- translation
license: apache-2.0
---
### gem-eng
* source group: Germanic languages
* target group: English
* OPUS readme: [gem-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md)
* model: transformer
* source language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 27.2 | 0.542 |
| news-test2008-deueng.deu.eng | 26.3 | 0.536 |
| newstest2009-deueng.deu.eng | 25.1 | 0.531 |
| newstest2010-deueng.deu.eng | 28.3 | 0.569 |
| newstest2011-deueng.deu.eng | 26.0 | 0.543 |
| newstest2012-deueng.deu.eng | 26.8 | 0.550 |
| newstest2013-deueng.deu.eng | 30.2 | 0.570 |
| newstest2014-deen-deueng.deu.eng | 30.7 | 0.574 |
| newstest2015-ende-deueng.deu.eng | 32.1 | 0.581 |
| newstest2016-ende-deueng.deu.eng | 36.9 | 0.624 |
| newstest2017-ende-deueng.deu.eng | 32.8 | 0.588 |
| newstest2018-ende-deueng.deu.eng | 40.2 | 0.640 |
| newstest2019-deen-deueng.deu.eng | 36.8 | 0.614 |
| Tatoeba-test.afr-eng.afr.eng | 62.8 | 0.758 |
| Tatoeba-test.ang-eng.ang.eng | 10.5 | 0.262 |
| Tatoeba-test.dan-eng.dan.eng | 61.6 | 0.754 |
| Tatoeba-test.deu-eng.deu.eng | 49.7 | 0.665 |
| Tatoeba-test.enm-eng.enm.eng | 23.9 | 0.491 |
| Tatoeba-test.fao-eng.fao.eng | 23.4 | 0.446 |
| Tatoeba-test.frr-eng.frr.eng | 10.2 | 0.184 |
| Tatoeba-test.fry-eng.fry.eng | 29.6 | 0.486 |
| Tatoeba-test.gos-eng.gos.eng | 17.8 | 0.352 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.058 |
| Tatoeba-test.gsw-eng.gsw.eng | 15.3 | 0.333 |
| Tatoeba-test.isl-eng.isl.eng | 51.0 | 0.669 |
| Tatoeba-test.ksh-eng.ksh.eng | 6.7 | 0.266 |
| Tatoeba-test.ltz-eng.ltz.eng | 33.0 | 0.505 |
| Tatoeba-test.multi.eng | 54.0 | 0.687 |
| Tatoeba-test.nds-eng.nds.eng | 33.6 | 0.529 |
| Tatoeba-test.nld-eng.nld.eng | 58.9 | 0.733 |
| Tatoeba-test.non-eng.non.eng | 37.3 | 0.546 |
| Tatoeba-test.nor-eng.nor.eng | 54.9 | 0.696 |
| Tatoeba-test.pdc-eng.pdc.eng | 29.6 | 0.446 |
| Tatoeba-test.sco-eng.sco.eng | 40.5 | 0.581 |
| Tatoeba-test.stq-eng.stq.eng | 14.5 | 0.361 |
| Tatoeba-test.swe-eng.swe.eng | 62.0 | 0.745 |
| Tatoeba-test.swg-eng.swg.eng | 17.1 | 0.334 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.400 |
### System Info:
- hf_name: gem-eng
- source_languages: gem
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
- src_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt
- src_alpha3: gem
- tgt_alpha3: eng
- short_pair: gem-en
- chrF2_score: 0.687
- bleu: 54.0
- brevity_penalty: 0.993
- ref_len: 72120.0
- src_name: Germanic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: gem
- tgt_alpha2: en
- prefer_old: False
- long_pair: gem-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
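The `brevity_penalty` and `ref_len` fields in the System Info block follow the standard BLEU definition. A small sketch; the hypothesis length in the comment is a back-calculated illustration, not a reported number:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis corpus is at least as
    long as the reference, else exp(1 - ref_len / hyp_len)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / hyp_len)

# A brevity_penalty of ~0.993 against the 72120-token reference corresponds to
# hypotheses roughly 0.7% shorter, e.g. brevity_penalty(71617, 72120) ≈ 0.993.
```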
Helsinki-NLP/opus-mt-gem-gem | 2021-01-18T08:52:02.000Z | [
"pytorch",
"marian",
"seq2seq",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"en",
"lb",
"yi",
"gem",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 42 | transformers | ---
language:
- da
- sv
- af
- nn
- fy
- fo
- de
- nb
- nl
- is
- en
- lb
- yi
- gem
tags:
- translation
license: apache-2.0
---
### gem-gem
* source group: Germanic languages
* target group: Germanic languages
* OPUS readme: [gem-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-gem/README.md)
* model: transformer
* source language(s): afr ang_Latn dan deu eng enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* target language(s): afr ang_Latn dan deu eng enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.eval.txt)
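Because this is a many-to-many model, the decoder must be told which Germanic language to produce via the sentence-initial `>>id<<` token described above. A small sketch of the required prefixing (the helper is illustrative; the token convention itself is the documented one):

```python
def with_target_token(text: str, target_id: str) -> str:
    """Prepend the sentence-initial language token (e.g. >>nld<<) that
    multilingual OPUS-MT models use to select the target language."""
    return f">>{target_id}<< {text}"

# One source sentence, three different target languages:
sources = [with_target_token("Hello, world!", lang) for lang in ("deu", "swe", "isl")]
# sources[0] == ">>deu<< Hello, world!"
```

The prefixed strings are then tokenized and passed to the model exactly like input for a single-pair checkpoint.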
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 24.5 | 0.519 |
| newssyscomb2009-engdeu.eng.deu | 18.7 | 0.495 |
| news-test2008-deueng.deu.eng | 22.8 | 0.509 |
| news-test2008-engdeu.eng.deu | 18.6 | 0.485 |
| newstest2009-deueng.deu.eng | 22.2 | 0.507 |
| newstest2009-engdeu.eng.deu | 18.3 | 0.491 |
| newstest2010-deueng.deu.eng | 24.8 | 0.537 |
| newstest2010-engdeu.eng.deu | 19.7 | 0.499 |
| newstest2011-deueng.deu.eng | 22.9 | 0.516 |
| newstest2011-engdeu.eng.deu | 18.3 | 0.485 |
| newstest2012-deueng.deu.eng | 23.9 | 0.524 |
| newstest2012-engdeu.eng.deu | 18.5 | 0.484 |
| newstest2013-deueng.deu.eng | 26.3 | 0.537 |
| newstest2013-engdeu.eng.deu | 21.5 | 0.506 |
| newstest2014-deen-deueng.deu.eng | 25.7 | 0.535 |
| newstest2015-ende-deueng.deu.eng | 27.3 | 0.542 |
| newstest2015-ende-engdeu.eng.deu | 24.2 | 0.534 |
| newstest2016-ende-deueng.deu.eng | 31.8 | 0.584 |
| newstest2016-ende-engdeu.eng.deu | 28.4 | 0.564 |
| newstest2017-ende-deueng.deu.eng | 27.6 | 0.545 |
| newstest2017-ende-engdeu.eng.deu | 22.8 | 0.527 |
| newstest2018-ende-deueng.deu.eng | 34.1 | 0.593 |
| newstest2018-ende-engdeu.eng.deu | 32.7 | 0.595 |
| newstest2019-deen-deueng.deu.eng | 30.6 | 0.565 |
| newstest2019-ende-engdeu.eng.deu | 29.5 | 0.567 |
| Tatoeba-test.afr-ang.afr.ang | 0.0 | 0.053 |
| Tatoeba-test.afr-dan.afr.dan | 57.8 | 0.907 |
| Tatoeba-test.afr-deu.afr.deu | 46.4 | 0.663 |
| Tatoeba-test.afr-eng.afr.eng | 57.4 | 0.717 |
| Tatoeba-test.afr-enm.afr.enm | 11.3 | 0.285 |
| Tatoeba-test.afr-fry.afr.fry | 0.0 | 0.167 |
| Tatoeba-test.afr-gos.afr.gos | 1.5 | 0.178 |
| Tatoeba-test.afr-isl.afr.isl | 29.0 | 0.760 |
| Tatoeba-test.afr-ltz.afr.ltz | 11.2 | 0.246 |
| Tatoeba-test.afr-nld.afr.nld | 53.3 | 0.708 |
| Tatoeba-test.afr-nor.afr.nor | 66.0 | 0.752 |
| Tatoeba-test.afr-swe.afr.swe | 88.0 | 0.955 |
| Tatoeba-test.afr-yid.afr.yid | 59.5 | 0.443 |
| Tatoeba-test.ang-afr.ang.afr | 10.7 | 0.043 |
| Tatoeba-test.ang-dan.ang.dan | 6.3 | 0.190 |
| Tatoeba-test.ang-deu.ang.deu | 1.4 | 0.212 |
| Tatoeba-test.ang-eng.ang.eng | 8.1 | 0.247 |
| Tatoeba-test.ang-enm.ang.enm | 1.7 | 0.196 |
| Tatoeba-test.ang-fao.ang.fao | 10.7 | 0.105 |
| Tatoeba-test.ang-gos.ang.gos | 10.7 | 0.128 |
| Tatoeba-test.ang-isl.ang.isl | 16.0 | 0.135 |
| Tatoeba-test.ang-ltz.ang.ltz | 16.0 | 0.121 |
| Tatoeba-test.ang-yid.ang.yid | 1.5 | 0.136 |
| Tatoeba-test.dan-afr.dan.afr | 22.7 | 0.655 |
| Tatoeba-test.dan-ang.dan.ang | 3.1 | 0.110 |
| Tatoeba-test.dan-deu.dan.deu | 47.4 | 0.676 |
| Tatoeba-test.dan-eng.dan.eng | 54.7 | 0.704 |
| Tatoeba-test.dan-enm.dan.enm | 4.8 | 0.291 |
| Tatoeba-test.dan-fao.dan.fao | 9.7 | 0.120 |
| Tatoeba-test.dan-gos.dan.gos | 3.8 | 0.240 |
| Tatoeba-test.dan-isl.dan.isl | 66.1 | 0.678 |
| Tatoeba-test.dan-ltz.dan.ltz | 78.3 | 0.563 |
| Tatoeba-test.dan-nds.dan.nds | 6.2 | 0.335 |
| Tatoeba-test.dan-nld.dan.nld | 60.0 | 0.748 |
| Tatoeba-test.dan-nor.dan.nor | 68.1 | 0.812 |
| Tatoeba-test.dan-swe.dan.swe | 65.0 | 0.785 |
| Tatoeba-test.dan-swg.dan.swg | 2.6 | 0.182 |
| Tatoeba-test.dan-yid.dan.yid | 9.3 | 0.226 |
| Tatoeba-test.deu-afr.deu.afr | 50.3 | 0.682 |
| Tatoeba-test.deu-ang.deu.ang | 0.5 | 0.118 |
| Tatoeba-test.deu-dan.deu.dan | 49.6 | 0.679 |
| Tatoeba-test.deu-eng.deu.eng | 43.4 | 0.618 |
| Tatoeba-test.deu-enm.deu.enm | 2.2 | 0.159 |
| Tatoeba-test.deu-frr.deu.frr | 0.4 | 0.156 |
| Tatoeba-test.deu-fry.deu.fry | 10.7 | 0.355 |
| Tatoeba-test.deu-gos.deu.gos | 0.7 | 0.183 |
| Tatoeba-test.deu-got.deu.got | 0.3 | 0.010 |
| Tatoeba-test.deu-gsw.deu.gsw | 1.1 | 0.130 |
| Tatoeba-test.deu-isl.deu.isl | 24.3 | 0.504 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.9 | 0.173 |
| Tatoeba-test.deu-ltz.deu.ltz | 15.6 | 0.304 |
| Tatoeba-test.deu-nds.deu.nds | 21.2 | 0.469 |
| Tatoeba-test.deu-nld.deu.nld | 47.1 | 0.657 |
| Tatoeba-test.deu-nor.deu.nor | 43.9 | 0.646 |
| Tatoeba-test.deu-pdc.deu.pdc | 3.0 | 0.133 |
| Tatoeba-test.deu-sco.deu.sco | 12.0 | 0.296 |
| Tatoeba-test.deu-stq.deu.stq | 0.6 | 0.137 |
| Tatoeba-test.deu-swe.deu.swe | 50.6 | 0.668 |
| Tatoeba-test.deu-swg.deu.swg | 0.2 | 0.137 |
| Tatoeba-test.deu-yid.deu.yid | 3.9 | 0.229 |
| Tatoeba-test.eng-afr.eng.afr | 55.2 | 0.721 |
| Tatoeba-test.eng-ang.eng.ang | 4.9 | 0.118 |
| Tatoeba-test.eng-dan.eng.dan | 52.6 | 0.684 |
| Tatoeba-test.eng-deu.eng.deu | 35.4 | 0.573 |
| Tatoeba-test.eng-enm.eng.enm | 1.8 | 0.223 |
| Tatoeba-test.eng-fao.eng.fao | 7.0 | 0.312 |
| Tatoeba-test.eng-frr.eng.frr | 1.2 | 0.050 |
| Tatoeba-test.eng-fry.eng.fry | 15.8 | 0.381 |
| Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.170 |
| Tatoeba-test.eng-got.eng.got | 0.3 | 0.011 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.5 | 0.126 |
| Tatoeba-test.eng-isl.eng.isl | 20.9 | 0.463 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.0 | 0.141 |
| Tatoeba-test.eng-ltz.eng.ltz | 12.8 | 0.292 |
| Tatoeba-test.eng-nds.eng.nds | 18.3 | 0.428 |
| Tatoeba-test.eng-nld.eng.nld | 47.3 | 0.657 |
| Tatoeba-test.eng-non.eng.non | 0.3 | 0.145 |
| Tatoeba-test.eng-nor.eng.nor | 47.2 | 0.650 |
| Tatoeba-test.eng-pdc.eng.pdc | 4.8 | 0.177 |
| Tatoeba-test.eng-sco.eng.sco | 38.1 | 0.597 |
| Tatoeba-test.eng-stq.eng.stq | 2.4 | 0.288 |
| Tatoeba-test.eng-swe.eng.swe | 52.7 | 0.677 |
| Tatoeba-test.eng-swg.eng.swg | 1.1 | 0.163 |
| Tatoeba-test.eng-yid.eng.yid | 4.5 | 0.223 |
| Tatoeba-test.enm-afr.enm.afr | 22.8 | 0.401 |
| Tatoeba-test.enm-ang.enm.ang | 0.4 | 0.062 |
| Tatoeba-test.enm-dan.enm.dan | 51.4 | 0.782 |
| Tatoeba-test.enm-deu.enm.deu | 33.8 | 0.473 |
| Tatoeba-test.enm-eng.enm.eng | 22.4 | 0.495 |
| Tatoeba-test.enm-fry.enm.fry | 16.0 | 0.173 |
| Tatoeba-test.enm-gos.enm.gos | 6.1 | 0.222 |
| Tatoeba-test.enm-isl.enm.isl | 59.5 | 0.651 |
| Tatoeba-test.enm-ksh.enm.ksh | 10.5 | 0.130 |
| Tatoeba-test.enm-nds.enm.nds | 18.1 | 0.327 |
| Tatoeba-test.enm-nld.enm.nld | 38.3 | 0.546 |
| Tatoeba-test.enm-nor.enm.nor | 15.6 | 0.290 |
| Tatoeba-test.enm-yid.enm.yid | 2.3 | 0.215 |
| Tatoeba-test.fao-ang.fao.ang | 2.1 | 0.035 |
| Tatoeba-test.fao-dan.fao.dan | 53.7 | 0.625 |
| Tatoeba-test.fao-eng.fao.eng | 24.7 | 0.435 |
| Tatoeba-test.fao-gos.fao.gos | 12.7 | 0.116 |
| Tatoeba-test.fao-isl.fao.isl | 26.3 | 0.341 |
| Tatoeba-test.fao-nor.fao.nor | 41.9 | 0.586 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 1.000 |
| Tatoeba-test.frr-deu.frr.deu | 7.4 | 0.263 |
| Tatoeba-test.frr-eng.frr.eng | 7.0 | 0.157 |
| Tatoeba-test.frr-fry.frr.fry | 4.0 | 0.112 |
| Tatoeba-test.frr-gos.frr.gos | 1.0 | 0.135 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.207 |
| Tatoeba-test.frr-nld.frr.nld | 10.6 | 0.227 |
| Tatoeba-test.frr-stq.frr.stq | 1.0 | 0.058 |
| Tatoeba-test.fry-afr.fry.afr | 12.7 | 0.333 |
| Tatoeba-test.fry-deu.fry.deu | 30.8 | 0.555 |
| Tatoeba-test.fry-eng.fry.eng | 31.2 | 0.506 |
| Tatoeba-test.fry-enm.fry.enm | 0.0 | 0.175 |
| Tatoeba-test.fry-frr.fry.frr | 1.6 | 0.091 |
| Tatoeba-test.fry-gos.fry.gos | 1.1 | 0.254 |
| Tatoeba-test.fry-ltz.fry.ltz | 30.4 | 0.526 |
| Tatoeba-test.fry-nds.fry.nds | 12.4 | 0.116 |
| Tatoeba-test.fry-nld.fry.nld | 43.4 | 0.637 |
| Tatoeba-test.fry-nor.fry.nor | 47.1 | 0.607 |
| Tatoeba-test.fry-stq.fry.stq | 0.6 | 0.181 |
| Tatoeba-test.fry-swe.fry.swe | 30.2 | 0.587 |
| Tatoeba-test.fry-yid.fry.yid | 3.1 | 0.173 |
| Tatoeba-test.gos-afr.gos.afr | 1.8 | 0.215 |
| Tatoeba-test.gos-ang.gos.ang | 0.0 | 0.045 |
| Tatoeba-test.gos-dan.gos.dan | 4.1 | 0.236 |
| Tatoeba-test.gos-deu.gos.deu | 19.6 | 0.406 |
| Tatoeba-test.gos-eng.gos.eng | 15.1 | 0.329 |
| Tatoeba-test.gos-enm.gos.enm | 5.8 | 0.271 |
| Tatoeba-test.gos-fao.gos.fao | 19.0 | 0.136 |
| Tatoeba-test.gos-frr.gos.frr | 1.3 | 0.119 |
| Tatoeba-test.gos-fry.gos.fry | 17.1 | 0.388 |
| Tatoeba-test.gos-isl.gos.isl | 16.8 | 0.356 |
| Tatoeba-test.gos-ltz.gos.ltz | 3.6 | 0.174 |
| Tatoeba-test.gos-nds.gos.nds | 4.7 | 0.225 |
| Tatoeba-test.gos-nld.gos.nld | 16.3 | 0.406 |
| Tatoeba-test.gos-stq.gos.stq | 0.7 | 0.154 |
| Tatoeba-test.gos-swe.gos.swe | 8.6 | 0.319 |
| Tatoeba-test.gos-yid.gos.yid | 4.4 | 0.165 |
| Tatoeba-test.got-deu.got.deu | 0.2 | 0.041 |
| Tatoeba-test.got-eng.got.eng | 0.2 | 0.068 |
| Tatoeba-test.got-nor.got.nor | 0.6 | 0.000 |
| Tatoeba-test.gsw-deu.gsw.deu | 15.9 | 0.373 |
| Tatoeba-test.gsw-eng.gsw.eng | 14.7 | 0.320 |
| Tatoeba-test.isl-afr.isl.afr | 38.0 | 0.641 |
| Tatoeba-test.isl-ang.isl.ang | 0.0 | 0.037 |
| Tatoeba-test.isl-dan.isl.dan | 67.7 | 0.836 |
| Tatoeba-test.isl-deu.isl.deu | 42.6 | 0.614 |
| Tatoeba-test.isl-eng.isl.eng | 43.5 | 0.610 |
| Tatoeba-test.isl-enm.isl.enm | 12.4 | 0.123 |
| Tatoeba-test.isl-fao.isl.fao | 15.6 | 0.176 |
| Tatoeba-test.isl-gos.isl.gos | 7.1 | 0.257 |
| Tatoeba-test.isl-nor.isl.nor | 53.5 | 0.690 |
| Tatoeba-test.isl-stq.isl.stq | 10.7 | 0.176 |
| Tatoeba-test.isl-swe.isl.swe | 67.7 | 0.818 |
| Tatoeba-test.ksh-deu.ksh.deu | 11.8 | 0.393 |
| Tatoeba-test.ksh-eng.ksh.eng | 4.0 | 0.239 |
| Tatoeba-test.ksh-enm.ksh.enm | 9.5 | 0.085 |
| Tatoeba-test.ltz-afr.ltz.afr | 36.5 | 0.529 |
| Tatoeba-test.ltz-ang.ltz.ang | 0.0 | 0.043 |
| Tatoeba-test.ltz-dan.ltz.dan | 80.6 | 0.722 |
| Tatoeba-test.ltz-deu.ltz.deu | 40.1 | 0.581 |
| Tatoeba-test.ltz-eng.ltz.eng | 36.1 | 0.511 |
| Tatoeba-test.ltz-fry.ltz.fry | 16.5 | 0.524 |
| Tatoeba-test.ltz-gos.ltz.gos | 0.7 | 0.118 |
| Tatoeba-test.ltz-nld.ltz.nld | 40.4 | 0.535 |
| Tatoeba-test.ltz-nor.ltz.nor | 19.1 | 0.582 |
| Tatoeba-test.ltz-stq.ltz.stq | 2.4 | 0.093 |
| Tatoeba-test.ltz-swe.ltz.swe | 25.9 | 0.430 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.5 | 0.160 |
| Tatoeba-test.multi.multi | 42.7 | 0.614 |
| Tatoeba-test.nds-dan.nds.dan | 23.0 | 0.465 |
| Tatoeba-test.nds-deu.nds.deu | 39.8 | 0.610 |
| Tatoeba-test.nds-eng.nds.eng | 32.0 | 0.520 |
| Tatoeba-test.nds-enm.nds.enm | 3.9 | 0.156 |
| Tatoeba-test.nds-frr.nds.frr | 10.7 | 0.127 |
| Tatoeba-test.nds-fry.nds.fry | 10.7 | 0.231 |
| Tatoeba-test.nds-gos.nds.gos | 0.8 | 0.157 |
| Tatoeba-test.nds-nld.nds.nld | 44.1 | 0.634 |
| Tatoeba-test.nds-nor.nds.nor | 47.1 | 0.665 |
| Tatoeba-test.nds-swg.nds.swg | 0.5 | 0.166 |
| Tatoeba-test.nds-yid.nds.yid | 12.7 | 0.337 |
| Tatoeba-test.nld-afr.nld.afr | 58.4 | 0.748 |
| Tatoeba-test.nld-dan.nld.dan | 61.3 | 0.753 |
| Tatoeba-test.nld-deu.nld.deu | 48.2 | 0.670 |
| Tatoeba-test.nld-eng.nld.eng | 52.8 | 0.690 |
| Tatoeba-test.nld-enm.nld.enm | 5.7 | 0.178 |
| Tatoeba-test.nld-frr.nld.frr | 0.9 | 0.159 |
| Tatoeba-test.nld-fry.nld.fry | 23.0 | 0.467 |
| Tatoeba-test.nld-gos.nld.gos | 1.0 | 0.165 |
| Tatoeba-test.nld-ltz.nld.ltz | 14.4 | 0.310 |
| Tatoeba-test.nld-nds.nld.nds | 24.1 | 0.485 |
| Tatoeba-test.nld-nor.nld.nor | 53.6 | 0.705 |
| Tatoeba-test.nld-sco.nld.sco | 15.0 | 0.415 |
| Tatoeba-test.nld-stq.nld.stq | 0.5 | 0.183 |
| Tatoeba-test.nld-swe.nld.swe | 73.6 | 0.842 |
| Tatoeba-test.nld-swg.nld.swg | 4.2 | 0.191 |
| Tatoeba-test.nld-yid.nld.yid | 9.4 | 0.299 |
| Tatoeba-test.non-eng.non.eng | 27.7 | 0.501 |
| Tatoeba-test.nor-afr.nor.afr | 48.2 | 0.687 |
| Tatoeba-test.nor-dan.nor.dan | 69.5 | 0.820 |
| Tatoeba-test.nor-deu.nor.deu | 41.1 | 0.634 |
| Tatoeba-test.nor-eng.nor.eng | 49.4 | 0.660 |
| Tatoeba-test.nor-enm.nor.enm | 6.8 | 0.230 |
| Tatoeba-test.nor-fao.nor.fao | 6.9 | 0.395 |
| Tatoeba-test.nor-fry.nor.fry | 9.2 | 0.323 |
| Tatoeba-test.nor-got.nor.got | 1.5 | 0.000 |
| Tatoeba-test.nor-isl.nor.isl | 34.5 | 0.555 |
| Tatoeba-test.nor-ltz.nor.ltz | 22.1 | 0.447 |
| Tatoeba-test.nor-nds.nor.nds | 34.3 | 0.565 |
| Tatoeba-test.nor-nld.nor.nld | 50.5 | 0.676 |
| Tatoeba-test.nor-nor.nor.nor | 57.6 | 0.764 |
| Tatoeba-test.nor-swe.nor.swe | 68.9 | 0.813 |
| Tatoeba-test.nor-yid.nor.yid | 65.0 | 0.627 |
| Tatoeba-test.pdc-deu.pdc.deu | 43.5 | 0.559 |
| Tatoeba-test.pdc-eng.pdc.eng | 26.1 | 0.471 |
| Tatoeba-test.sco-deu.sco.deu | 7.1 | 0.295 |
| Tatoeba-test.sco-eng.sco.eng | 34.4 | 0.551 |
| Tatoeba-test.sco-nld.sco.nld | 9.9 | 0.438 |
| Tatoeba-test.stq-deu.stq.deu | 8.6 | 0.385 |
| Tatoeba-test.stq-eng.stq.eng | 21.8 | 0.431 |
| Tatoeba-test.stq-frr.stq.frr | 2.1 | 0.111 |
| Tatoeba-test.stq-fry.stq.fry | 7.6 | 0.267 |
| Tatoeba-test.stq-gos.stq.gos | 0.7 | 0.198 |
| Tatoeba-test.stq-isl.stq.isl | 16.0 | 0.121 |
| Tatoeba-test.stq-ltz.stq.ltz | 3.8 | 0.150 |
| Tatoeba-test.stq-nld.stq.nld | 14.6 | 0.375 |
| Tatoeba-test.stq-yid.stq.yid | 2.4 | 0.096 |
| Tatoeba-test.swe-afr.swe.afr | 51.8 | 0.802 |
| Tatoeba-test.swe-dan.swe.dan | 64.9 | 0.784 |
| Tatoeba-test.swe-deu.swe.deu | 47.0 | 0.657 |
| Tatoeba-test.swe-eng.swe.eng | 55.8 | 0.700 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.060 |
| Tatoeba-test.swe-fry.swe.fry | 14.1 | 0.449 |
| Tatoeba-test.swe-gos.swe.gos | 7.5 | 0.291 |
| Tatoeba-test.swe-isl.swe.isl | 70.7 | 0.812 |
| Tatoeba-test.swe-ltz.swe.ltz | 15.9 | 0.553 |
| Tatoeba-test.swe-nld.swe.nld | 78.7 | 0.854 |
| Tatoeba-test.swe-nor.swe.nor | 67.1 | 0.799 |
| Tatoeba-test.swe-yid.swe.yid | 14.7 | 0.156 |
| Tatoeba-test.swg-dan.swg.dan | 7.7 | 0.341 |
| Tatoeba-test.swg-deu.swg.deu | 8.0 | 0.334 |
| Tatoeba-test.swg-eng.swg.eng | 12.4 | 0.305 |
| Tatoeba-test.swg-nds.swg.nds | 1.1 | 0.209 |
| Tatoeba-test.swg-nld.swg.nld | 4.9 | 0.244 |
| Tatoeba-test.swg-yid.swg.yid | 3.4 | 0.194 |
| Tatoeba-test.yid-afr.yid.afr | 23.6 | 0.552 |
| Tatoeba-test.yid-ang.yid.ang | 0.1 | 0.066 |
| Tatoeba-test.yid-dan.yid.dan | 17.5 | 0.392 |
| Tatoeba-test.yid-deu.yid.deu | 21.0 | 0.423 |
| Tatoeba-test.yid-eng.yid.eng | 17.4 | 0.368 |
| Tatoeba-test.yid-enm.yid.enm | 0.6 | 0.143 |
| Tatoeba-test.yid-fry.yid.fry | 5.3 | 0.169 |
| Tatoeba-test.yid-gos.yid.gos | 1.2 | 0.149 |
| Tatoeba-test.yid-ltz.yid.ltz | 3.5 | 0.256 |
| Tatoeba-test.yid-nds.yid.nds | 14.4 | 0.487 |
| Tatoeba-test.yid-nld.yid.nld | 26.1 | 0.423 |
| Tatoeba-test.yid-nor.yid.nor | 47.1 | 0.583 |
| Tatoeba-test.yid-stq.yid.stq | 1.5 | 0.092 |
| Tatoeba-test.yid-swe.yid.swe | 35.9 | 0.518 |
| Tatoeba-test.yid-swg.yid.swg | 1.0 | 0.124 |
### System Info:
- hf_name: gem-gem
- source_languages: gem
- target_languages: gem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-gem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
- src_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.test.txt
- src_alpha3: gem
- tgt_alpha3: gem
- short_pair: gem-gem
- chrF2_score: 0.614
- bleu: 42.7
- brevity_penalty: 0.993
- ref_len: 73459.0
- src_name: Germanic languages
- tgt_name: Germanic languages
- train_date: 2020-07-27
- src_alpha2: gem
- tgt_alpha2: gem
- prefer_old: False
- long_pair: gem-gem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gil-en | 2021-01-18T08:52:07.000Z | [
"pytorch",
"marian",
"seq2seq",
"gil",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 53 | transformers | ---
tags:
- translation
---
### opus-mt-gil-en
* source languages: gil
* target languages: en
* OPUS readme: [gil-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.en | 36.0 | 0.522 |
|
Helsinki-NLP/opus-mt-gil-es | 2021-01-18T08:52:14.000Z | [
"pytorch",
"marian",
"seq2seq",
"gil",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 42 | transformers | ---
tags:
- translation
---
### opus-mt-gil-es
* source languages: gil
* target languages: es
* OPUS readme: [gil-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.es | 21.8 | 0.398 |
|
Helsinki-NLP/opus-mt-gil-fi | 2021-01-18T08:52:19.000Z | [
"pytorch",
"marian",
"seq2seq",
"gil",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 36 | transformers | ---
tags:
- translation
---
### opus-mt-gil-fi
* source languages: gil
* target languages: fi
* OPUS readme: [gil-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.fi | 23.1 | 0.447 |
|
Helsinki-NLP/opus-mt-gil-fr | 2021-01-18T08:52:25.000Z | [
"pytorch",
"marian",
"seq2seq",
"gil",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 44 | transformers | ---
tags:
- translation
---
### opus-mt-gil-fr
* source languages: gil
* target languages: fr
* OPUS readme: [gil-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.fr | 24.9 | 0.424 |
|
Helsinki-NLP/opus-mt-gil-sv | 2021-01-18T08:52:31.000Z | [
"pytorch",
"marian",
"seq2seq",
"gil",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 53 | transformers | ---
tags:
- translation
---
### opus-mt-gil-sv
* source languages: gil
* target languages: sv
* OPUS readme: [gil-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.sv | 25.8 | 0.441 |
|
Helsinki-NLP/opus-mt-gl-en | 2021-01-18T08:52:37.000Z | [
"pytorch",
"marian",
"seq2seq",
"gl",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 196 | transformers | ---
language:
- gl
- en
tags:
- translation
license: apache-2.0
---
### glg-eng
* source group: Galician
* target group: English
* OPUS readme: [glg-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.eng | 44.4 | 0.628 |
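The cards in this listing all follow the same `Helsinki-NLP/opus-mt-{src}-{tgt}` naming scheme on the Hub. As a minimal sketch (the `opus_mt_model_id` helper below is our own illustration, not part of the release):

```python
# Hypothetical helper: build the Hugging Face model id for an OPUS-MT pair
# from its source/target codes, following the short-pair naming scheme
# "Helsinki-NLP/opus-mt-{src}-{tgt}" used throughout this listing.
def opus_mt_model_id(src: str, tgt: str) -> str:
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

print(opus_mt_model_id("gl", "en"))  # Helsinki-NLP/opus-mt-gl-en

# Typical usage with transformers (requires a model download, so shown
# here only as comments):
#   from transformers import MarianMTModel, MarianTokenizer
#   name = opus_mt_model_id("gl", "en")
#   tokenizer = MarianTokenizer.from_pretrained(name)
#   model = MarianMTModel.from_pretrained(name)
#   batch = tokenizer(["Bos días"], return_tensors="pt")
#   print(tokenizer.batch_decode(model.generate(**batch),
#                                skip_special_tokens=True))
```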
### System Info:
- hf_name: glg-eng
- source_languages: glg
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'en']
- src_constituents: {'glg'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: eng
- short_pair: gl-en
- chrF2_score: 0.628
- bleu: 44.4
- brevity_penalty: 0.975
- ref_len: 8365.0
- src_name: Galician
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: en
- prefer_old: False
- long_pair: glg-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gl-es | 2021-01-18T08:52:42.000Z | [
"pytorch",
"marian",
"seq2seq",
"gl",
"es",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 75 | transformers | ---
language:
- gl
- es
tags:
- translation
license: apache-2.0
---
### glg-spa
* source group: Galician
* target group: Spanish
* OPUS readme: [glg-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-spa/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.spa | 72.2 | 0.836 |
### System Info:
- hf_name: glg-spa
- source_languages: glg
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'es']
- src_constituents: {'glg'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: spa
- short_pair: gl-es
- chrF2_score: 0.836
- bleu: 72.2
- brevity_penalty: 0.982
- ref_len: 17443.0
- src_name: Galician
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: es
- prefer_old: False
- long_pair: glg-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gl-pt | 2021-01-18T08:52:45.000Z | [
"pytorch",
"marian",
"seq2seq",
"gl",
"pt",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 29 | transformers | ---
language:
- gl
- pt
tags:
- translation
license: apache-2.0
---
### glg-por
* source group: Galician
* target group: Portuguese
* OPUS readme: [glg-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-por/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): por
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.por | 57.9 | 0.758 |
### System Info:
- hf_name: glg-por
- source_languages: glg
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'pt']
- src_constituents: {'glg'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: por
- short_pair: gl-pt
- chrF2_score: 0.758
- bleu: 57.9
- brevity_penalty: 0.977
- ref_len: 3078.0
- src_name: Galician
- tgt_name: Portuguese
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: pt
- prefer_old: False
- long_pair: glg-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gmq-en | 2021-01-18T08:52:51.000Z | [
"pytorch",
"marian",
"seq2seq",
"da",
"nb",
"sv",
"is",
"nn",
"fo",
"gmq",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 44 | transformers | ---
language:
- da
- nb
- sv
- is
- nn
- fo
- gmq
- en
tags:
- translation
license: apache-2.0
---
### gmq-eng
* source group: North Germanic languages
* target group: English
* OPUS readme: [gmq-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md)
* model: transformer
* source language(s): dan fao isl nno nob nob_Hebr non_Latn swe
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.zip)
* test set translations: [opus2m-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.test.txt)
* test set scores: [opus2m-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.multi.eng | 58.1 | 0.720 |
### System Info:
- hf_name: gmq-eng
- source_languages: gmq
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq', 'en']
- src_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.test.txt
- src_alpha3: gmq
- tgt_alpha3: eng
- short_pair: gmq-en
- chrF2_score: 0.72
- bleu: 58.1
- brevity_penalty: 0.982
- ref_len: 72641.0
- src_name: North Germanic languages
- tgt_name: English
- train_date: 2020-07-26
- src_alpha2: gmq
- tgt_alpha2: en
- prefer_old: False
- long_pair: gmq-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gmq-gmq | 2021-01-18T08:52:55.000Z | [
"pytorch",
"marian",
"seq2seq",
"da",
"nb",
"sv",
"is",
"nn",
"fo",
"gmq",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 39 | transformers | ---
language:
- da
- nb
- sv
- is
- nn
- fo
- gmq
tags:
- translation
license: apache-2.0
---
### gmq-gmq
* source group: North Germanic languages
* target group: North Germanic languages
* OPUS readme: [gmq-gmq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-gmq/README.md)
* model: transformer
* source language(s): dan fao isl nno nob swe
* target language(s): dan fao isl nno nob swe
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.eval.txt)
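Because this model is multilingual on the target side, each input sentence must carry the sentence-initial `>>id<<` token noted above. A minimal sketch (the helper name is our own, not part of the release):

```python
# Hypothetical helper: prepend the sentence-initial target-language token
# that multilingual OPUS-MT models such as gmq-gmq expect. The id must be
# one of the model's valid target language IDs (here: dan fao isl nno nob swe).
def with_target_token(text: str, target_lang: str) -> str:
    return f">>{target_lang}<< {text}"

print(with_target_token("Jag älskar dig.", "dan"))  # >>dan<< Jag älskar dig.
```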
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.dan-fao.dan.fao | 8.1 | 0.173 |
| Tatoeba-test.dan-isl.dan.isl | 52.5 | 0.827 |
| Tatoeba-test.dan-nor.dan.nor | 62.8 | 0.772 |
| Tatoeba-test.dan-swe.dan.swe | 67.6 | 0.802 |
| Tatoeba-test.fao-dan.fao.dan | 11.3 | 0.306 |
| Tatoeba-test.fao-isl.fao.isl | 26.3 | 0.359 |
| Tatoeba-test.fao-nor.fao.nor | 36.8 | 0.531 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 0.632 |
| Tatoeba-test.isl-dan.isl.dan | 67.0 | 0.739 |
| Tatoeba-test.isl-fao.isl.fao | 14.5 | 0.243 |
| Tatoeba-test.isl-nor.isl.nor | 51.8 | 0.674 |
| Tatoeba-test.isl-swe.isl.swe | 100.0 | 1.000 |
| Tatoeba-test.multi.multi | 64.7 | 0.782 |
| Tatoeba-test.nor-dan.nor.dan | 65.6 | 0.797 |
| Tatoeba-test.nor-fao.nor.fao | 9.4 | 0.362 |
| Tatoeba-test.nor-isl.nor.isl | 38.8 | 0.587 |
| Tatoeba-test.nor-nor.nor.nor | 51.9 | 0.721 |
| Tatoeba-test.nor-swe.nor.swe | 66.5 | 0.789 |
| Tatoeba-test.swe-dan.swe.dan | 67.6 | 0.802 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.268 |
| Tatoeba-test.swe-isl.swe.isl | 65.8 | 0.914 |
| Tatoeba-test.swe-nor.swe.nor | 60.6 | 0.755 |
### System Info:
- hf_name: gmq-gmq
- source_languages: gmq
- target_languages: gmq
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-gmq/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']
- src_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- tgt_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.test.txt
- src_alpha3: gmq
- tgt_alpha3: gmq
- short_pair: gmq-gmq
- chrF2_score: 0.782
- bleu: 64.7
- brevity_penalty: 0.994
- ref_len: 49385.0
- src_name: North Germanic languages
- tgt_name: North Germanic languages
- train_date: 2020-07-27
- src_alpha2: gmq
- tgt_alpha2: gmq
- prefer_old: False
- long_pair: gmq-gmq
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gmw-en | 2021-01-18T08:53:00.000Z | [
"pytorch",
"marian",
"seq2seq",
"nl",
"en",
"lb",
"af",
"de",
"fy",
"yi",
"gmw",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 30 | transformers | ---
language:
- nl
- en
- lb
- af
- de
- fy
- yi
- gmw
tags:
- translation
license: apache-2.0
---
### gmw-eng
* source group: West Germanic languages
* target group: English
* OPUS readme: [gmw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-eng/README.md)
* model: transformer
* source language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 27.2 | 0.538 |
| news-test2008-deueng.deu.eng | 25.7 | 0.534 |
| newstest2009-deueng.deu.eng | 25.1 | 0.530 |
| newstest2010-deueng.deu.eng | 27.9 | 0.565 |
| newstest2011-deueng.deu.eng | 25.3 | 0.539 |
| newstest2012-deueng.deu.eng | 26.6 | 0.548 |
| newstest2013-deueng.deu.eng | 29.6 | 0.565 |
| newstest2014-deen-deueng.deu.eng | 30.2 | 0.571 |
| newstest2015-ende-deueng.deu.eng | 31.5 | 0.577 |
| newstest2016-ende-deueng.deu.eng | 36.7 | 0.622 |
| newstest2017-ende-deueng.deu.eng | 32.3 | 0.585 |
| newstest2018-ende-deueng.deu.eng | 39.9 | 0.638 |
| newstest2019-deen-deueng.deu.eng | 35.9 | 0.611 |
| Tatoeba-test.afr-eng.afr.eng | 61.8 | 0.750 |
| Tatoeba-test.ang-eng.ang.eng | 7.3 | 0.220 |
| Tatoeba-test.deu-eng.deu.eng | 48.3 | 0.657 |
| Tatoeba-test.enm-eng.enm.eng | 16.1 | 0.423 |
| Tatoeba-test.frr-eng.frr.eng | 7.0 | 0.168 |
| Tatoeba-test.fry-eng.fry.eng | 28.6 | 0.488 |
| Tatoeba-test.gos-eng.gos.eng | 15.5 | 0.326 |
| Tatoeba-test.gsw-eng.gsw.eng | 12.7 | 0.308 |
| Tatoeba-test.ksh-eng.ksh.eng | 8.4 | 0.254 |
| Tatoeba-test.ltz-eng.ltz.eng | 28.7 | 0.453 |
| Tatoeba-test.multi.eng | 48.5 | 0.646 |
| Tatoeba-test.nds-eng.nds.eng | 31.4 | 0.509 |
| Tatoeba-test.nld-eng.nld.eng | 58.1 | 0.728 |
| Tatoeba-test.pdc-eng.pdc.eng | 25.1 | 0.406 |
| Tatoeba-test.sco-eng.sco.eng | 40.8 | 0.570 |
| Tatoeba-test.stq-eng.stq.eng | 20.3 | 0.380 |
| Tatoeba-test.swg-eng.swg.eng | 20.5 | 0.315 |
| Tatoeba-test.yid-eng.yid.eng | 16.0 | 0.366 |
### System Info:
- hf_name: gmw-eng
- source_languages: gmw
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.test.txt
- src_alpha3: gmw
- tgt_alpha3: eng
- short_pair: gmw-en
- chrF2_score: 0.646
- bleu: 48.5
- brevity_penalty: 0.997
- ref_len: 72584.0
- src_name: West Germanic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: gmw
- tgt_alpha2: en
- prefer_old: False
- long_pair: gmw-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gmw-gmw | 2021-01-18T08:53:04.000Z | [
"pytorch",
"marian",
"seq2seq",
"nl",
"en",
"lb",
"af",
"de",
"fy",
"yi",
"gmw",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 45 | transformers | ---
language:
- nl
- en
- lb
- af
- de
- fy
- yi
- gmw
tags:
- translation
license: apache-2.0
---
### gmw-gmw
* source group: West Germanic languages
* target group: West Germanic languages
* OPUS readme: [gmw-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md)
* model: transformer
* source language(s): afr ang_Latn deu eng enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): afr ang_Latn deu eng enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 25.3 | 0.527 |
| newssyscomb2009-engdeu.eng.deu | 19.0 | 0.502 |
| news-test2008-deueng.deu.eng | 23.7 | 0.515 |
| news-test2008-engdeu.eng.deu | 19.2 | 0.491 |
| newstest2009-deueng.deu.eng | 23.1 | 0.514 |
| newstest2009-engdeu.eng.deu | 18.6 | 0.495 |
| newstest2010-deueng.deu.eng | 25.8 | 0.545 |
| newstest2010-engdeu.eng.deu | 20.3 | 0.505 |
| newstest2011-deueng.deu.eng | 23.7 | 0.523 |
| newstest2011-engdeu.eng.deu | 18.9 | 0.490 |
| newstest2012-deueng.deu.eng | 24.4 | 0.529 |
| newstest2012-engdeu.eng.deu | 19.2 | 0.489 |
| newstest2013-deueng.deu.eng | 27.2 | 0.545 |
| newstest2013-engdeu.eng.deu | 22.4 | 0.514 |
| newstest2014-deen-deueng.deu.eng | 27.0 | 0.546 |
| newstest2015-ende-deueng.deu.eng | 28.4 | 0.552 |
| newstest2015-ende-engdeu.eng.deu | 25.3 | 0.541 |
| newstest2016-ende-deueng.deu.eng | 33.2 | 0.595 |
| newstest2016-ende-engdeu.eng.deu | 29.8 | 0.578 |
| newstest2017-ende-deueng.deu.eng | 29.0 | 0.557 |
| newstest2017-ende-engdeu.eng.deu | 23.9 | 0.534 |
| newstest2018-ende-deueng.deu.eng | 35.9 | 0.607 |
| newstest2018-ende-engdeu.eng.deu | 34.8 | 0.609 |
| newstest2019-deen-deueng.deu.eng | 32.1 | 0.579 |
| newstest2019-ende-engdeu.eng.deu | 31.0 | 0.579 |
| Tatoeba-test.afr-ang.afr.ang | 0.0 | 0.065 |
| Tatoeba-test.afr-deu.afr.deu | 46.8 | 0.668 |
| Tatoeba-test.afr-eng.afr.eng | 58.5 | 0.728 |
| Tatoeba-test.afr-enm.afr.enm | 13.4 | 0.357 |
| Tatoeba-test.afr-fry.afr.fry | 5.3 | 0.026 |
| Tatoeba-test.afr-gos.afr.gos | 3.5 | 0.228 |
| Tatoeba-test.afr-ltz.afr.ltz | 1.6 | 0.131 |
| Tatoeba-test.afr-nld.afr.nld | 55.4 | 0.715 |
| Tatoeba-test.afr-yid.afr.yid | 3.4 | 0.008 |
| Tatoeba-test.ang-afr.ang.afr | 3.1 | 0.096 |
| Tatoeba-test.ang-deu.ang.deu | 2.6 | 0.188 |
| Tatoeba-test.ang-eng.ang.eng | 5.4 | 0.211 |
| Tatoeba-test.ang-enm.ang.enm | 1.7 | 0.197 |
| Tatoeba-test.ang-gos.ang.gos | 6.6 | 0.186 |
| Tatoeba-test.ang-ltz.ang.ltz | 5.3 | 0.072 |
| Tatoeba-test.ang-yid.ang.yid | 0.9 | 0.131 |
| Tatoeba-test.deu-afr.deu.afr | 52.7 | 0.699 |
| Tatoeba-test.deu-ang.deu.ang | 0.8 | 0.133 |
| Tatoeba-test.deu-eng.deu.eng | 43.5 | 0.621 |
| Tatoeba-test.deu-enm.deu.enm | 6.9 | 0.245 |
| Tatoeba-test.deu-frr.deu.frr | 0.8 | 0.200 |
| Tatoeba-test.deu-fry.deu.fry | 15.1 | 0.367 |
| Tatoeba-test.deu-gos.deu.gos | 2.2 | 0.279 |
| Tatoeba-test.deu-gsw.deu.gsw | 1.0 | 0.176 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.6 | 0.208 |
| Tatoeba-test.deu-ltz.deu.ltz | 12.1 | 0.274 |
| Tatoeba-test.deu-nds.deu.nds | 18.8 | 0.446 |
| Tatoeba-test.deu-nld.deu.nld | 48.6 | 0.669 |
| Tatoeba-test.deu-pdc.deu.pdc | 4.6 | 0.198 |
| Tatoeba-test.deu-sco.deu.sco | 12.0 | 0.340 |
| Tatoeba-test.deu-stq.deu.stq | 3.2 | 0.240 |
| Tatoeba-test.deu-swg.deu.swg | 0.5 | 0.179 |
| Tatoeba-test.deu-yid.deu.yid | 1.7 | 0.160 |
| Tatoeba-test.eng-afr.eng.afr | 55.8 | 0.730 |
| Tatoeba-test.eng-ang.eng.ang | 5.7 | 0.157 |
| Tatoeba-test.eng-deu.eng.deu | 36.7 | 0.584 |
| Tatoeba-test.eng-enm.eng.enm | 2.0 | 0.272 |
| Tatoeba-test.eng-frr.eng.frr | 6.1 | 0.246 |
| Tatoeba-test.eng-fry.eng.fry | 15.3 | 0.378 |
| Tatoeba-test.eng-gos.eng.gos | 1.2 | 0.242 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.164 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.9 | 0.170 |
| Tatoeba-test.eng-ltz.eng.ltz | 13.7 | 0.263 |
| Tatoeba-test.eng-nds.eng.nds | 17.1 | 0.410 |
| Tatoeba-test.eng-nld.eng.nld | 49.6 | 0.673 |
| Tatoeba-test.eng-pdc.eng.pdc | 5.1 | 0.218 |
| Tatoeba-test.eng-sco.eng.sco | 34.8 | 0.587 |
| Tatoeba-test.eng-stq.eng.stq | 2.1 | 0.322 |
| Tatoeba-test.eng-swg.eng.swg | 1.7 | 0.192 |
| Tatoeba-test.eng-yid.eng.yid | 1.7 | 0.173 |
| Tatoeba-test.enm-afr.enm.afr | 13.4 | 0.397 |
| Tatoeba-test.enm-ang.enm.ang | 0.7 | 0.063 |
| Tatoeba-test.enm-deu.enm.deu | 41.5 | 0.514 |
| Tatoeba-test.enm-eng.enm.eng | 21.3 | 0.483 |
| Tatoeba-test.enm-fry.enm.fry | 0.0 | 0.058 |
| Tatoeba-test.enm-gos.enm.gos | 10.7 | 0.354 |
| Tatoeba-test.enm-ksh.enm.ksh | 7.0 | 0.161 |
| Tatoeba-test.enm-nds.enm.nds | 18.6 | 0.316 |
| Tatoeba-test.enm-nld.enm.nld | 38.3 | 0.524 |
| Tatoeba-test.enm-yid.enm.yid | 0.7 | 0.128 |
| Tatoeba-test.frr-deu.frr.deu | 4.1 | 0.219 |
| Tatoeba-test.frr-eng.frr.eng | 14.1 | 0.186 |
| Tatoeba-test.frr-fry.frr.fry | 3.1 | 0.129 |
| Tatoeba-test.frr-gos.frr.gos | 3.6 | 0.226 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.145 |
| Tatoeba-test.frr-nld.frr.nld | 9.8 | 0.209 |
| Tatoeba-test.frr-stq.frr.stq | 2.8 | 0.142 |
| Tatoeba-test.fry-afr.fry.afr | 0.0 | 1.000 |
| Tatoeba-test.fry-deu.fry.deu | 30.1 | 0.535 |
| Tatoeba-test.fry-eng.fry.eng | 28.0 | 0.486 |
| Tatoeba-test.fry-enm.fry.enm | 16.0 | 0.262 |
| Tatoeba-test.fry-frr.fry.frr | 5.5 | 0.160 |
| Tatoeba-test.fry-gos.fry.gos | 1.6 | 0.307 |
| Tatoeba-test.fry-ltz.fry.ltz | 30.4 | 0.438 |
| Tatoeba-test.fry-nds.fry.nds | 8.1 | 0.083 |
| Tatoeba-test.fry-nld.fry.nld | 41.4 | 0.616 |
| Tatoeba-test.fry-stq.fry.stq | 1.6 | 0.217 |
| Tatoeba-test.fry-yid.fry.yid | 1.6 | 0.159 |
| Tatoeba-test.gos-afr.gos.afr | 6.3 | 0.318 |
| Tatoeba-test.gos-ang.gos.ang | 6.2 | 0.058 |
| Tatoeba-test.gos-deu.gos.deu | 11.7 | 0.363 |
| Tatoeba-test.gos-eng.gos.eng | 14.9 | 0.322 |
| Tatoeba-test.gos-enm.gos.enm | 9.1 | 0.398 |
| Tatoeba-test.gos-frr.gos.frr | 3.3 | 0.117 |
| Tatoeba-test.gos-fry.gos.fry | 13.1 | 0.387 |
| Tatoeba-test.gos-ltz.gos.ltz | 3.1 | 0.154 |
| Tatoeba-test.gos-nds.gos.nds | 2.4 | 0.206 |
| Tatoeba-test.gos-nld.gos.nld | 13.9 | 0.395 |
| Tatoeba-test.gos-stq.gos.stq | 2.1 | 0.209 |
| Tatoeba-test.gos-yid.gos.yid | 1.7 | 0.147 |
| Tatoeba-test.gsw-deu.gsw.deu | 10.5 | 0.350 |
| Tatoeba-test.gsw-eng.gsw.eng | 10.7 | 0.299 |
| Tatoeba-test.ksh-deu.ksh.deu | 12.0 | 0.373 |
| Tatoeba-test.ksh-eng.ksh.eng | 3.2 | 0.225 |
| Tatoeba-test.ksh-enm.ksh.enm | 13.4 | 0.308 |
| Tatoeba-test.ltz-afr.ltz.afr | 37.4 | 0.525 |
| Tatoeba-test.ltz-ang.ltz.ang | 2.8 | 0.036 |
| Tatoeba-test.ltz-deu.ltz.deu | 40.3 | 0.596 |
| Tatoeba-test.ltz-eng.ltz.eng | 31.7 | 0.490 |
| Tatoeba-test.ltz-fry.ltz.fry | 36.3 | 0.658 |
| Tatoeba-test.ltz-gos.ltz.gos | 2.9 | 0.209 |
| Tatoeba-test.ltz-nld.ltz.nld | 38.8 | 0.530 |
| Tatoeba-test.ltz-stq.ltz.stq | 5.8 | 0.165 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.0 | 0.159 |
| Tatoeba-test.multi.multi | 36.4 | 0.568 |
| Tatoeba-test.nds-deu.nds.deu | 35.0 | 0.573 |
| Tatoeba-test.nds-eng.nds.eng | 29.6 | 0.495 |
| Tatoeba-test.nds-enm.nds.enm | 3.7 | 0.194 |
| Tatoeba-test.nds-frr.nds.frr | 6.6 | 0.133 |
| Tatoeba-test.nds-fry.nds.fry | 4.2 | 0.087 |
| Tatoeba-test.nds-gos.nds.gos | 2.0 | 0.243 |
| Tatoeba-test.nds-nld.nds.nld | 41.4 | 0.618 |
| Tatoeba-test.nds-swg.nds.swg | 0.6 | 0.178 |
| Tatoeba-test.nds-yid.nds.yid | 8.3 | 0.238 |
| Tatoeba-test.nld-afr.nld.afr | 59.4 | 0.759 |
| Tatoeba-test.nld-deu.nld.deu | 49.9 | 0.685 |
| Tatoeba-test.nld-eng.nld.eng | 54.1 | 0.699 |
| Tatoeba-test.nld-enm.nld.enm | 5.0 | 0.250 |
| Tatoeba-test.nld-frr.nld.frr | 2.4 | 0.224 |
| Tatoeba-test.nld-fry.nld.fry | 19.4 | 0.446 |
| Tatoeba-test.nld-gos.nld.gos | 2.5 | 0.273 |
| Tatoeba-test.nld-ltz.nld.ltz | 13.8 | 0.292 |
| Tatoeba-test.nld-nds.nld.nds | 21.3 | 0.457 |
| Tatoeba-test.nld-sco.nld.sco | 14.7 | 0.423 |
| Tatoeba-test.nld-stq.nld.stq | 1.9 | 0.257 |
| Tatoeba-test.nld-swg.nld.swg | 4.2 | 0.162 |
| Tatoeba-test.nld-yid.nld.yid | 2.6 | 0.186 |
| Tatoeba-test.pdc-deu.pdc.deu | 39.7 | 0.529 |
| Tatoeba-test.pdc-eng.pdc.eng | 25.0 | 0.427 |
| Tatoeba-test.sco-deu.sco.deu | 28.4 | 0.428 |
| Tatoeba-test.sco-eng.sco.eng | 41.8 | 0.595 |
| Tatoeba-test.sco-nld.sco.nld | 36.4 | 0.565 |
| Tatoeba-test.stq-deu.stq.deu | 7.7 | 0.328 |
| Tatoeba-test.stq-eng.stq.eng | 21.1 | 0.428 |
| Tatoeba-test.stq-frr.stq.frr | 2.0 | 0.118 |
| Tatoeba-test.stq-fry.stq.fry | 6.3 | 0.255 |
| Tatoeba-test.stq-gos.stq.gos | 1.4 | 0.244 |
| Tatoeba-test.stq-ltz.stq.ltz | 4.4 | 0.204 |
| Tatoeba-test.stq-nld.stq.nld | 10.7 | 0.371 |
| Tatoeba-test.stq-yid.stq.yid | 1.4 | 0.105 |
| Tatoeba-test.swg-deu.swg.deu | 9.5 | 0.343 |
| Tatoeba-test.swg-eng.swg.eng | 15.1 | 0.306 |
| Tatoeba-test.swg-nds.swg.nds | 0.7 | 0.196 |
| Tatoeba-test.swg-nld.swg.nld | 11.6 | 0.308 |
| Tatoeba-test.swg-yid.swg.yid | 0.9 | 0.186 |
| Tatoeba-test.yid-afr.yid.afr | 100.0 | 1.000 |
| Tatoeba-test.yid-ang.yid.ang | 0.6 | 0.079 |
| Tatoeba-test.yid-deu.yid.deu | 16.7 | 0.372 |
| Tatoeba-test.yid-eng.yid.eng | 15.8 | 0.344 |
| Tatoeba-test.yid-enm.yid.enm | 1.3 | 0.166 |
| Tatoeba-test.yid-fry.yid.fry | 5.6 | 0.157 |
| Tatoeba-test.yid-gos.yid.gos | 2.2 | 0.160 |
| Tatoeba-test.yid-ltz.yid.ltz | 2.1 | 0.238 |
| Tatoeba-test.yid-nds.yid.nds | 14.4 | 0.365 |
| Tatoeba-test.yid-nld.yid.nld | 20.9 | 0.397 |
| Tatoeba-test.yid-stq.yid.stq | 3.7 | 0.165 |
| Tatoeba-test.yid-swg.yid.swg | 1.8 | 0.156 |
### System Info:
- hf_name: gmw-gmw
- source_languages: gmw
- target_languages: gmw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.test.txt
- src_alpha3: gmw
- tgt_alpha3: gmw
- short_pair: gmw-gmw
- chrF2_score: 0.568
- bleu: 36.4
- brevity_penalty: 1.0
- ref_len: 72534.0
- src_name: West Germanic languages
- tgt_name: West Germanic languages
- train_date: 2020-07-27
- src_alpha2: gmw
- tgt_alpha2: gmw
- prefer_old: False
- long_pair: gmw-gmw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-grk-en | 2021-01-18T08:53:09.000Z | [
"pytorch",
"marian",
"seq2seq",
"el",
"grk",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 116 | transformers | ---
language:
- el
- grk
- en
tags:
- translation
license: apache-2.0
---
### grk-eng
* source group: Greek languages
* target group: English
* OPUS readme: [grk-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/grk-eng/README.md)
* model: transformer
* source language(s): ell grc_Grek
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell-eng.ell.eng | 65.9 | 0.779 |
| Tatoeba-test.grc-eng.grc.eng | 4.1 | 0.187 |
| Tatoeba-test.multi.eng | 60.9 | 0.733 |
### System Info:
- hf_name: grk-eng
- source_languages: grk
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/grk-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'grk', 'en']
- src_constituents: {'grc_Grek', 'ell'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.test.txt
- src_alpha3: grk
- tgt_alpha3: eng
- short_pair: grk-en
- chrF2_score: 0.733
- bleu: 60.9
- brevity_penalty: 0.973
- ref_len: 62205.0
- src_name: Greek languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: grk
- tgt_alpha2: en
- prefer_old: False
- long_pair: grk-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-guw-de | 2021-01-18T08:53:13.000Z | [
"pytorch",
"marian",
"seq2seq",
"guw",
"de",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-guw-de
* source languages: guw
* target languages: de
* OPUS readme: [guw-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.de | 22.7 | 0.434 |

Helsinki-NLP/opus-mt-guw-en
- lastModified: 2021-01-18T08:53:18.000Z
- tags: pytorch, marian, seq2seq, guw, en, transformers, translation, text2text-generation
- pipeline_tag: translation
- files: .gitattributes, README.md, config.json, pytorch_model.bin, source.spm, target.spm, tokenizer_config.json, vocab.json
- publishedBy: Helsinki-NLP
- downloads_last_month: 55
- library: transformers

---
tags:
- translation
---
### opus-mt-guw-en
* source languages: guw
* target languages: en
* OPUS readme: [guw-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.en | 44.8 | 0.601 |
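Every card in this listing links its original weights, test-set translations, and test-set scores with the same URL pattern under `https://object.pouta.csc.fi/OPUS-MT-models/`. A small sketch of that pattern (the helper name `opus_mt_urls` is illustrative):

```python
def opus_mt_urls(pair: str, release: str) -> dict:
    """Build the three OPUS-MT artifact URLs (weights zip, test-set translations,
    test-set scores) that each model card links, following the shared pattern."""
    stem = f"https://object.pouta.csc.fi/OPUS-MT-models/{pair}/opus-{release}"
    return {
        "weights": f"{stem}.zip",
        "test_translations": f"{stem}.test.txt",
        "scores": f"{stem}.eval.txt",
    }

urls = opus_mt_urls("guw-en", "2020-01-09")
print(urls["weights"])
# https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.zip
```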

Helsinki-NLP/opus-mt-guw-es
- lastModified: 2021-01-18T08:53:23.000Z
- tags: pytorch, marian, seq2seq, guw, es, transformers, translation, text2text-generation
- pipeline_tag: translation
- files: .gitattributes, README.md, config.json, pytorch_model.bin, source.spm, target.spm, tokenizer_config.json, vocab.json
- publishedBy: Helsinki-NLP
- downloads_last_month: 61
- library: transformers

---
tags:
- translation
---
### opus-mt-guw-es
* source languages: guw
* target languages: es
* OPUS readme: [guw-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.es | 27.2 | 0.457 |

Helsinki-NLP/opus-mt-guw-fi
- lastModified: 2021-01-18T08:53:29.000Z
- tags: pytorch, marian, seq2seq, guw, fi, transformers, translation, text2text-generation
- pipeline_tag: translation
- files: .gitattributes, README.md, config.json, pytorch_model.bin, source.spm, target.spm, tokenizer_config.json, vocab.json
- publishedBy: Helsinki-NLP
- downloads_last_month: 52
- library: transformers

---
tags:
- translation
---
### opus-mt-guw-fi
* source languages: guw
* target languages: fi
* OPUS readme: [guw-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.fi | 27.7 | 0.512 |
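Each card closes with a Benchmarks table in the same three-column markdown shape. A minimal sketch for reading those tables back into a dictionary (the function name `parse_benchmarks` is illustrative, and it assumes exactly the testset/BLEU/chr-F layout used above):

```python
def parse_benchmarks(table_md: str) -> dict:
    """Parse a card's markdown Benchmarks table into {testset: {"BLEU": ..., "chr-F": ...}}."""
    rows = {}
    lines = [ln for ln in table_md.strip().splitlines() if ln.strip()]
    for line in lines[2:]:  # skip the header row and the |---| separator
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        rows[cells[0]] = {"BLEU": float(cells[1]), "chr-F": float(cells[2])}
    return rows

table = """
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.fi | 27.7 | 0.512 |
"""
print(parse_benchmarks(table))
# {'JW300.guw.fi': {'BLEU': 27.7, 'chr-F': 0.512}}
```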