modelId (string) | lastModified (string) | tags (list) | pipeline_tag (string, 21 classes) | files (list) | publishedBy (string) | downloads_last_month (int32) | library (string, 15 classes) | modelCard (string)
---|---|---|---|---|---|---|---|---
Jitin/manglish | 2021-05-20T11:57:45.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| Jitin | 13 | transformers | |
Jitin/romanized-malayalam | 2021-05-20T11:58:42.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json",
"Jitin/romanized_1M/config.json",
"Jitin/romanized_1M/merges.txt",
"Jitin/romanized_1M/pytorch_model.bin",
"Jitin/romanized_1M/special_tokens_map.json",
"Jitin/romanized_1M/tokenizer_config.json",
"Jitin/romanized_1M/training_args.bin",
"Jitin/romanized_1M/vocab.json",
"drive/My Drive/Colab Notebooks/malayalam/models/romanized_1M/config.json",
"drive/My Drive/Colab Notebooks/malayalam/models/romanized_1M/pytorch_model.bin",
"drive/My Drive/Colab Notebooks/malayalam/models/romanized_1M/training_args.bin"
]
| Jitin | 20 | transformers | |
Jllama/dialoGPT-small-Joshua-test | 2021-06-02T06:46:07.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| Jllama | 22 | transformers | ---
tags:
- conversational
---
# My Awesome Model |
Jodsa/camembert_clf | 2021-05-18T14:29:37.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| Jodsa | 16 | transformers | |
Jodsa/camembert_mlm | 2021-05-17T13:06:25.000Z | [
"pytorch",
"camembert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| Jodsa | 6 | transformers | |
Johnnil/model_name | 2021-04-07T08:53:13.000Z | []
| [
".gitattributes",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt",
"results/pytorch_model.bin",
"results/training_args.bin"
]
| Johnnil | 0 | |||
Johnnil/prestoBERT | 2021-04-13T19:53:50.000Z | []
| [
".gitattributes",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt",
"results/pytorch_model.bin",
"results/training_args.bin"
]
| Johnnil | 0 | |||
Jon/model_name | 2021-02-05T17:31:03.000Z | []
| [
".gitattributes"
]
| Jon | 0 | |||
Jon/testRetailModel | 2021-02-05T17:37:30.000Z | []
| [
".gitattributes"
]
| Jon | 0 | |||
JonathanCmitchell/model_name | 2021-01-24T07:53:30.000Z | []
| [
".gitattributes"
]
| JonathanCmitchell | 0 | |||
JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k | 2021-01-21T21:04:06.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"ConvTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-source-separation
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
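As a quick check that the checkpoint works, it can be loaded through Asteroid's generic pretrained-model interface. A minimal sketch (assuming a recent `asteroid` install with Hugging Face Hub support and a local 16kHz `mixture.wav`; not part of the original card):
```python
# Minimal usage sketch; `mixture.wav` is a placeholder input file.
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k")
# Writes the estimated source next to the input, e.g. mixture_est1.wav.
model.separate("mixture.wav")
```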
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.743051006476085
si_sdr_imp: 11.293269700616385
sdr: 15.300522933671061
sdr_imp: 11.797860134458015
sir: Infinity
sir_imp: NaN
sar: 15.300522933671061
sar_imp: 11.797860134458015
stoi: 0.9310514162434267
stoi_imp: 0.13513159270288563
```
License notice:
This work "ConvTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/ConvTasNet_Libri2Mix_sepclean_16k | 2021-01-21T21:09:50.000Z | [
"pytorch",
"dataset:Libri2Mix",
"dataset:sep_clean",
"asteroid",
"audio",
"ConvTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-source-separation
datasets:
- Libri2Mix
- sep_clean
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results :
On Libri2Mix min test set :
```yaml
si_sdr: 15.243671356901526
si_sdr_imp: 15.243034178473609
sdr: 15.668108919568112
sdr_imp: 15.578229918028036
sir: 25.295100756629957
sir_imp: 25.205219921301754
sar: 16.307682590197313
sar_imp: -51.64989963759405
stoi: 0.9394951175291422
stoi_imp: 0.22640192740016568
```
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino. |
JorisCos/ConvTasNet_Libri2Mix_sepclean_8k | 2021-01-21T21:04:36.000Z | [
"pytorch",
"dataset:Libri2Mix",
"dataset:sep_clean",
"asteroid",
"audio",
"ConvTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-source-separation
datasets:
- Libri2Mix
- sep_clean
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_8k`
Imported from [Zenodo](https://zenodo.org/record/3873572#.X9M69cLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 2
```
Results :
On Libri2Mix min test set :
```yaml
si_sdr: 14.764543634468069
si_sdr_imp: 14.764029375607246
sdr: 15.29337970745095
sdr_imp: 15.114146605113111
sir: 24.092904661115366
sir_imp: 23.913669683141528
sar: 16.06055906916849
sar_imp: -51.980784441287454
stoi: 0.9311142440593033
stoi_imp: 0.21817376142710482
```
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino. |
JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k | 2021-01-21T21:10:09.000Z | [
"pytorch",
"dataset:Libri2Mix",
"dataset:sep_noisy",
"asteroid",
"audio",
"ConvTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-source-separation
datasets:
- Libri2Mix
- sep_noisy
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 2
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri2Mix min test set :
```yml
si_sdr: 10.617130949793383
si_sdr_imp: 12.551811412989263
sdr: 11.231867464482065
sdr_imp: 13.059765009747343
sir: 24.461138352988346
sir_imp: 24.371856452307703
sar: 11.5649982725426
sar_imp: 4.662525705768228
stoi: 0.8701085138712695
stoi_imp: 0.2245418019822898
```
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k | 2021-01-21T21:07:13.000Z | [
"pytorch",
"dataset:Libri2Mix",
"dataset:sep_noisy",
"asteroid",
"audio",
"ConvTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-source-separation
datasets:
- Libri2Mix
- sep_noisy
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k`
Imported from [Zenodo](https://zenodo.org/record/3874420#.X9I6NcLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
Results:
On Libri2Mix min test set :
```yml
si_sdr: 9.944424856077259
si_sdr_imp: 11.939395359731192
sdr: 10.701526190782072
sdr_imp: 12.481757547845662
sir: 22.633644975545575
sir_imp: 22.45666740833025
sar: 11.131644100944868
sar_imp: 4.248489589311784
stoi: 0.852048619949357
stoi_imp: 0.2071994899565506
```
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_8k" is licensed under A[Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/ConvTasNet_Libri3Mix_sepclean_16k | 2021-01-21T21:10:25.000Z | [
"pytorch",
"dataset:Libri3Mix",
"dataset:sep_clean",
"asteroid",
"audio",
"ConvTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-source-separation
datasets:
- Libri3Mix
- sep_clean
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yaml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results :
On Libri3Mix min test set :
```yaml
si_sdr: 8.932601610824145
si_sdr_imp: 12.299341066588594
sdr: 9.557260814240447
sdr_imp: 12.76957128385349
sir: 17.387646884037455
sir_imp: 20.599955591768484
sar: 10.686885056960504
sar_imp: -55.8894643263213
stoi: 0.8481258332025354
stoi_imp: 0.25528367853750356
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino. |
JorisCos/ConvTasNet_Libri3Mix_sepclean_8k | 2021-01-21T21:07:34.000Z | [
"pytorch",
"dataset:Libri3Mix",
"dataset:sep_clean",
"asteroid",
"audio",
"ConvTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-source-separation
datasets:
- Libri3Mix
- sep_clean
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results :
On Libri3Mix min test set :
```yaml
si_sdr: 8.581797049575108
si_sdr_imp: 11.977037288467368
sdr: 9.305885208641385
sdr_imp: 12.3943409734845
sir: 16.42030534048559
sir_imp: 19.508759460400984
sar: 10.641943911079238
sar_imp: -56.4345187842095
stoi: 0.8365148408724333
stoi_imp: 0.24401766199806396
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino. |
JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k | 2021-01-21T21:10:42.000Z | [
"pytorch",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"asteroid",
"audio",
"ConvTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-source-separation
datasets:
- Libri3Mix
- sep_noisy
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.926151147554517
si_sdr_imp: 10.282912158535625
sdr: 6.700975236867358
sdr_imp: 10.882972447337504
sir: 15.364110064569388
sir_imp: 18.574476587171688
sar: 7.918866830474568
sar_imp: -0.9638973409971135
stoi: 0.7713777027310713
stoi_imp: 0.2078696167973911
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k | 2021-01-21T21:07:59.000Z | [
"pytorch",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"asteroid",
"audio",
"ConvTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-source-separation
datasets:
- Libri3Mix
- sep_noisy
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.978836560066222
si_sdr_imp: 10.388889689413096
sdr: 6.8651365291740225
sdr_imp: 10.928018056925016
sir: 14.997089638783114
sir_imp: 18.08248357801549
sar: 8.127504792061933
sar_imp: -0.7869320540959925
stoi: 0.7669414686111115
stoi_imp: 0.20416563213078837
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri3Mix_sepnoisy_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/DCCRNet_Libri1Mix_enhsingle_16k | 2021-02-23T15:39:25.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"DCCRNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- DCCRNet
- audio-source-separation
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/DCCRNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_kernel_size: 400
stft_n_filters: 512
stft_stride: 100
masknet:
architecture: DCCRN-CL
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 12
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.329767398333798
si_sdr_imp: 9.879986092474098
sdr: 13.87279932997016
sdr_imp: 10.370136530757103
sir: Infinity
sir_imp: NaN
sar: 13.87279932997016
sar_imp: 10.370136530757103
stoi: 0.9140907015623948
stoi_imp: 0.11817087802185405
```
License notice:
This work "DCCRNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCCRNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/DCUNet_Libri1Mix_enhsingle_16k | 2021-03-08T09:29:13.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"DCUNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- DCUNet
- audio-source-separation
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/DCUNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_n_filters: 1024
stft_kernel_size: 1024
stft_stride: 256
masknet:
architecture: Large-DCUNet-20
fix_length_mode: pad
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.154035391645971
si_sdr_imp: 9.704254085786271
sdr: 13.568058873121435
sdr_imp: 10.065396073908367
sar: 13.568058873121435
sar_imp: 10.065396073908367
stoi: 0.9199373340235417
stoi_imp: 0.12401751048300132
```
License notice:
This work "DCUNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCUNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k | 2021-02-23T15:38:51.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"DPRNNTasNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- DPRNNTasNet
- audio-source-separation
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 1
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 2
n_filters: 64
stride: 1
masknet:
bidirectional: true
bn_chan: 128
chunk_size: 250
dropout: 0
hid_size: 128
hop_size: 125
in_chan: 64
mask_act: sigmoid
n_repeats: 6
n_src: 1
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.7228101708889
si_sdr_imp: 11.2730288650292
sdr: 15.35661405197161
sdr_imp: 11.853951252758595
sir: Infinity
sir_imp: NaN
sar: 15.35661405197161
sar_imp: 11.853951252758595
stoi: 0.9300461826351578
stoi_imp: 0.13412635909461715
```
License notice:
This work "DPRNNTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPRNNTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/DPTNet_Libri1Mix_enhsingle_16k | 2021-02-23T15:39:43.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"DPTNet",
"audio-source-separation",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| JorisCos | 0 | asteroid | ---
tags:
- asteroid
- audio
- DPTNet
- audio-source-separation
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `JorisCos/DPTNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 16
n_filters: 64
stride: 8
masknet:
bidirectional: true
chunk_size: 100
dropout: 0
ff_activation: relu
ff_hid: 256
hop_size: 50
in_chan: 64
mask_act: sigmoid
n_repeats: 2
n_src: 1
norm_type: gLN
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
scheduler:
d_model: 64
steps_per_epoch: 10000
training:
batch_size: 4
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.829670037349064
si_sdr_imp: 11.379888731489366
sdr: 15.395712644737149
sdr_imp: 11.893049845524112
sir: Infinity
sir_imp: NaN
sar: 15.395712644737149
sar_imp: 11.893049845524112
stoi: 0.9301948391058859
stoi_imp: 0.13427501556534832
```
License notice:
This work "DPTNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPTNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JosAbc123/Loken | 2020-12-13T04:44:53.000Z | []
| [
".gitattributes"
]
| JosAbc123 | 0 | |||
JoshObi94/GPT-Neo | 2021-04-09T08:14:23.000Z | []
| [
".gitattributes"
]
| JoshObi94 | 0 | |||
JovenPai/bert_cn_finetunning | 2021-05-18T21:15:39.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| JovenPai | 14 | transformers | |
JovenPai/bert_finetunning_test | 2021-05-18T21:16:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| JovenPai | 16 | transformers | |
JuliusAlphonso/dear-jarvis-monolith-xed-en | 2021-06-17T12:46:00.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| JuliusAlphonso | 27 | transformers | |
Jung/t5-base | 2020-08-09T08:04:13.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin"
]
| Jung | 17 | transformers | |
Jung/t5-large-finetuned | 2020-08-31T06:02:42.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| Jung | 12 | transformers | |
Jung/t5-large | 2020-08-17T03:12:49.000Z | [
"pytorch",
"t5",
"lm-head",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"dataset-metadata.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| Jung | 53 | transformers | |
JunzheJosephZhu/MultiDecoderDPRNN | 2021-06-07T15:59:13.000Z | [
"dataset:Wsj0MixVar",
"dataset:sep_clean",
"arxiv:2011.12022",
"asteroid",
"audio",
"MultiDecoderDPRNN"
]
| [
".gitattributes",
"README.md"
]
| JunzheJosephZhu | 0 | asteroid | ---
tags:
- asteroid
- audio
- MultiDecoderDPRNN
datasets:
- Wsj0MixVar
- sep_clean
inference: false
---
## Asteroid model
## Description:
Refer to the paper "Multi-Decoder DPRNN: High Accuracy Source Counting and Separation",
Junzhe Zhu, Raymond Yeh, Mark Hasegawa-Johnson. https://arxiv.org/abs/2011.12022
Demo Page: https://junzhejosephzhu.github.io/Multi-Decoder-DPRNN/
Original research repo is at https://github.com/JunzheJosephZhu/MultiDecoder-DPRNN
This model was trained by Joseph Zhu using the wsj0-mix-var/Multi-Decoder-DPRNN recipe in Asteroid.
It was trained on the `sep_clean` task of the Wsj0MixVar dataset.
## Training config:
```yaml
filterbank:
n_filters: 64
kernel_size: 8
stride: 4
masknet:
n_srcs: [2, 3, 4, 5]
bn_chan: 128
hid_size: 128
chunk_size: 128
hop_size: 64
n_repeats: 8
mask_act: 'sigmoid'
bidirectional: true
dropout: 0
use_mulcat: false
training:
epochs: 200
batch_size: 2
num_workers: 2
half_lr: yes
lr_decay: yes
early_stop: yes
gradient_clipping: 5
optim:
optimizer: adam
lr: 0.001
weight_decay: 0.00000
data:
train_dir: "data/{}speakers/wav8k/min/tr"
valid_dir: "data/{}speakers/wav8k/min/cv"
task: sep_clean
sample_rate: 8000
seglen: 4.0
minlen: 2.0
loss:
lambda: 0.05
```
## Results:
|
|
Jzz/FIDIC_BERT--0.1 | 2021-05-25T02:46:36.000Z | [
"pytorch",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"rng_state.pth",
"scheduler.pt",
"trainer_state.json",
"training_args.bin"
]
| Jzz | 14 | transformers | |
KB/albert-base-swedish-cased-alpha | 2020-12-11T21:29:07.000Z | [
"pytorch",
"albert",
"sv",
"transformers"
]
| [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| KB | 102 | transformers | ---
language: sv
---
# Swedish BERT Models
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, swedish wikipedia and internet forums) aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
- **bert-base-swedish-cased** (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
- **bert-base-swedish-cased-ner** (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
- **albert-base-swedish-cased-alpha** (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
## Files
| **name** | **files** |
|---------------------------------|-----------|
| bert-base-swedish-cased | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/vocab.txt), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/pytorch_model.bin) |
| bert-base-swedish-cased-ner | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/vocab.txt) [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/pytorch_model.bin) |
| albert-base-swedish-cased-alpha | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/config.json), [sentencepiece model](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/spiece.model), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/pytorch_model.bin) |
TensorFlow model weights will be released soon.
## Usage requirements / installation instructions
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the `do_lower_case` flag parameter set to `False` and `keep_accents` to `True` (for ALBERT).
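For those older versions, manual instantiation could look like this (a sketch, not from the original instructions; the class names and flags are standard Transformers arguments):
```python
# Hypothetical manual tokenizer setup for Transformers<2.4.0.
from transformers import BertTokenizer, AlbertTokenizer

bert_tok = BertTokenizer.from_pretrained('KB/bert-base-swedish-cased', do_lower_case=False)
albert_tok = AlbertTokenizer.from_pretrained('KB/albert-base-swedish-cased-alpha',
                                             do_lower_case=False, keep_accents=True)
```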
To create an environment where the examples can be run, run the following in a terminal on your OS of choice.
```
# git clone https://github.com/Kungbib/swedish-bert-models
# cd swedish-bert-models
# python3 -m venv venv
# source venv/bin/activate
# pip install --upgrade pip
# pip install -r requirements.txt
```
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KB/bert-base-swedish-cased')
model = AutoModel.from_pretrained('KB/bert-base-swedish-cased')
```
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformers<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:
```python
from transformers import pipeline
nlp = pipeline('ner', model='KB/bert-base-swedish-cased-ner', tokenizer='KB/bert-base-swedish-cased-ner')
nlp('Idag släpper KB tre språkmodeller.')
```
Running the Python code above should produce something like the result below. Entity types used are `TME` for time, `PRS` for personal names, `LOC` for locations, `EVN` for events and `ORG` for organisations. These labels are subject to change.
```python
[ { 'word': 'Idag', 'score': 0.9998126029968262, 'entity': 'TME' },
{ 'word': 'KB', 'score': 0.9814832210540771, 'entity': 'ORG' } ]
```
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with `##`, for example the string `Engelbert kör Volvo till Herrängens fotbollsklubb` gets tokenized as `Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb`. To glue parts back together one can use something like this:
```python
text = 'Engelbert tar Volvon till Tele2 Arena för att titta på Djurgården IF ' +\
'som spelar fotboll i VM klockan två på kvällen.'
l = []
for token in nlp(text):
if token['word'].startswith('##'):
l[-1]['word'] += token['word'][2:]
else:
l += [ token ]
print(l)
```
Which should result in the following (though less cleanly formatted):
```python
[ { 'word': 'Engelbert', 'score': 0.99..., 'entity': 'PRS'},
{ 'word': 'Volvon', 'score': 0.99..., 'entity': 'OBJ'},
{ 'word': 'Tele2', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Arena', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Djurgården', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'IF', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'VM', 'score': 0.99..., 'entity': 'EVN'},
{ 'word': 'klockan', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'två', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'på', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'kvällen', 'score': 0.54..., 'entity': 'TME'} ]
```
### ALBERT base
The easiest way to do this is, again, using Huggingface Transformers:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KB/albert-base-swedish-cased-alpha')
model = AutoModel.from_pretrained('KB/albert-base-swedish-cased-alpha')
```
## Acknowledgements ❤️
- Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
- Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
- Models are hosted on S3 by Huggingface 🤗
|
|
KB/bert-base-swedish-cased-alpha | 2021-05-18T21:17:41.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| KB | 87 | transformers | |
KB/bert-base-swedish-cased-ner | 2021-05-18T21:18:54.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"sv",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| KB | 1,576 | transformers | ---
language: sv
---
# Swedish BERT Models
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, swedish wikipedia and internet forums) aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
- **bert-base-swedish-cased** (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
- **bert-base-swedish-cased-ner** (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
- **albert-base-swedish-cased-alpha** (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
## Files
| **name** | **files** |
|---------------------------------|-----------|
| bert-base-swedish-cased | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/vocab.txt), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/pytorch_model.bin) |
| bert-base-swedish-cased-ner | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/vocab.txt) [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/pytorch_model.bin) |
| albert-base-swedish-cased-alpha | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/config.json), [sentencepiece model](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/spiece.model), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/pytorch_model.bin) |
TensorFlow model weights will be released soon.
## Usage requirements / installation instructions
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the `do_lower_case` flag parameter set to `False` and `keep_accents` to `True` (for ALBERT).
To create an environment where the examples can be run, run the following in a terminal on your OS of choice.
```
# git clone https://github.com/Kungbib/swedish-bert-models
# cd swedish-bert-models
# python3 -m venv venv
# source venv/bin/activate
# pip install --upgrade pip
# pip install -r requirements.txt
```
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KB/bert-base-swedish-cased')
model = AutoModel.from_pretrained('KB/bert-base-swedish-cased')
```
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformers<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:
```python
from transformers import pipeline
nlp = pipeline('ner', model='KB/bert-base-swedish-cased-ner', tokenizer='KB/bert-base-swedish-cased-ner')
nlp('Idag släpper KB tre språkmodeller.')
```
Running the Python code above should produce something like the result below. Entity types used are `TME` for time, `PRS` for personal names, `LOC` for locations, `EVN` for events and `ORG` for organisations. These labels are subject to change.
```python
[ { 'word': 'Idag', 'score': 0.9998126029968262, 'entity': 'TME' },
{ 'word': 'KB', 'score': 0.9814832210540771, 'entity': 'ORG' } ]
```
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with `##`, for example the string `Engelbert kör Volvo till Herrängens fotbollsklubb` gets tokenized as `Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb`. To glue parts back together one can use something like this:
```python
text = 'Engelbert tar Volvon till Tele2 Arena för att titta på Djurgården IF ' +\
'som spelar fotboll i VM klockan två på kvällen.'
l = []
for token in nlp(text):
if token['word'].startswith('##'):
l[-1]['word'] += token['word'][2:]
else:
l += [ token ]
print(l)
```
Which should result in the following (though less cleanly formatted):
```python
[ { 'word': 'Engelbert', 'score': 0.99..., 'entity': 'PRS'},
{ 'word': 'Volvon', 'score': 0.99..., 'entity': 'OBJ'},
{ 'word': 'Tele2', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Arena', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Djurgården', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'IF', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'VM', 'score': 0.99..., 'entity': 'EVN'},
{ 'word': 'klockan', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'två', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'på', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'kvällen', 'score': 0.54..., 'entity': 'TME'} ]
```
### ALBERT base
The easiest way to do this is, again, using Huggingface Transformers:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KB/albert-base-swedish-cased-alpha')
model = AutoModel.from_pretrained('KB/albert-base-swedish-cased-alpha')
```
## Acknowledgements ❤️
- Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
- Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
- Models are hosted on S3 by Huggingface 🤗
|
KB/bert-base-swedish-cased-neriob | 2021-05-18T21:20:00.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| KB | 29 | transformers | |
KB/bert-base-swedish-cased-pos | 2021-05-18T21:20:59.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| KB | 30 | transformers | |
KB/bert-base-swedish-cased-squad-experimental | 2021-05-18T21:21:57.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"nbest_predictions_.json",
"null_odds_.json",
"predictions_.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| KB | 198 | transformers | |
KB/bert-base-swedish-cased | 2021-05-18T21:23:28.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"sv",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| KB | 21,016 | transformers | ---
language: sv
---
# Swedish BERT Models
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, swedish wikipedia and internet forums) aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
- **bert-base-swedish-cased** (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
- **bert-base-swedish-cased-ner** (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
- **albert-base-swedish-cased-alpha** (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
## Files
| **name** | **files** |
|---------------------------------|-----------|
| bert-base-swedish-cased | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/vocab.txt), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/pytorch_model.bin) |
| bert-base-swedish-cased-ner | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/vocab.txt) [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/pytorch_model.bin) |
| albert-base-swedish-cased-alpha | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/config.json), [sentencepiece model](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/spiece.model), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/pytorch_model.bin) |
TensorFlow model weights will be released soon.
## Usage requirements / installation instructions
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the `do_lower_case` flag parameter set to `False` and `keep_accents` to `True` (for ALBERT).
To create an environment where the examples can be run, run the following in a terminal on your OS of choice.
```
# git clone https://github.com/Kungbib/swedish-bert-models
# cd swedish-bert-models
# python3 -m venv venv
# source venv/bin/activate
# pip install --upgrade pip
# pip install -r requirements.txt
```
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KB/bert-base-swedish-cased')
model = AutoModel.from_pretrained('KB/bert-base-swedish-cased')
```
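Since this checkpoint is tagged `fill-mask`, masked-word prediction also works directly; here is a short sketch (the example sentence is ours, the `fill-mask` pipeline is standard Transformers API):
```python
from transformers import pipeline

# Hypothetical fill-mask example; '[MASK]' is BERT's standard mask token.
fill = pipeline('fill-mask', model='KB/bert-base-swedish-cased')
print(fill('Huvudstaden i Sverige är [MASK].'))
```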
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformers<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:
```python
from transformers import pipeline
nlp = pipeline('ner', model='KB/bert-base-swedish-cased-ner', tokenizer='KB/bert-base-swedish-cased-ner')
nlp('Idag släpper KB tre språkmodeller.')
```
Running the Python code above should produce something like the result below. Entity types used are `TME` for time, `PRS` for personal names, `LOC` for locations, `EVN` for events and `ORG` for organisations. These labels are subject to change.
```python
[ { 'word': 'Idag', 'score': 0.9998126029968262, 'entity': 'TME' },
{ 'word': 'KB', 'score': 0.9814832210540771, 'entity': 'ORG' } ]
```
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with `##`, for example the string `Engelbert kör Volvo till Herrängens fotbollsklubb` gets tokenized as `Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb`. To glue parts back together one can use something like this:
```python
text = 'Engelbert tar Volvon till Tele2 Arena för att titta på Djurgården IF ' +\
'som spelar fotboll i VM klockan två på kvällen.'
l = []
for token in nlp(text):
if token['word'].startswith('##'):
l[-1]['word'] += token['word'][2:]
else:
l += [ token ]
print(l)
```
Which should result in the following (though less cleanly formatted):
```python
[ { 'word': 'Engelbert', 'score': 0.99..., 'entity': 'PRS'},
{ 'word': 'Volvon', 'score': 0.99..., 'entity': 'OBJ'},
{ 'word': 'Tele2', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Arena', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Djurgården', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'IF', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'VM', 'score': 0.99..., 'entity': 'EVN'},
{ 'word': 'klockan', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'två', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'på', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'kvällen', 'score': 0.54..., 'entity': 'TME'} ]
```
### ALBERT base
The easiest way to do this is, again, using Huggingface Transformers:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KB/albert-base-swedish-cased-alpha')
model = AutoModel.from_pretrained('KB/albert-base-swedish-cased-alpha')
```
## Acknowledgements ❤️
- Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
- Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
- Models are hosted on S3 by Huggingface 🤗
|
KB/electra-base-swedish-cased-discriminator | 2021-01-20T13:15:09.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| KB | 76 | transformers | ||
KB/electra-base-swedish-cased-generator | 2021-01-20T13:17:06.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| KB | 13 | transformers | |
KB/electra-small-swedish-cased-discriminator | 2020-10-21T08:17:53.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| KB | 16 | transformers | ||
KB/electra-small-swedish-cased-generator | 2020-10-21T08:17:40.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| KB | 59 | transformers | |
KBLab/wav2vec2-base-voxpopuli-sv-swedish | 2021-05-07T07:25:31.000Z | [
"pytorch",
"wav2vec2",
"sv-SE",
"dataset:common_voice",
"dataset:NST Swedish ASR Database",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"voxpopuli",
"license:cc-by-nc-4.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| KBLab | 12 | transformers | ---
language: sv-SE
datasets:
- common_voice
- NST Swedish ASR Database
metrics:
- wer
#- cer
tags:
- audio
- automatic-speech-recognition
- speech
- voxpopuli
license: cc-by-nc-4.0
model-index:
- name: Wav2vec 2.0 base VoxPopuli-sv swedish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: NST Swedish ASR Database
metrics:
- name: Test WER
type: wer
value: 5.619804368919309
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 19.145252414798616
---
# Wav2vec 2.0 base-voxpopuli-sv-swedish
Finetuned version of Facebook's [VoxPopuli-sv base](https://huggingface.co/facebook/wav2vec2-base-sv-voxpopuli) model using NST and Common Voice data. Evaluation without a language model gives the following: WER for the NST + Common Voice test set (2% of total sentences) is **5.62%**; WER for the Common Voice test set is **19.15%**.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-base-voxpopuli-sv-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-base-voxpopuli-sv-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` |
KBLab/wav2vec2-large-voxpopuli-sv-swedish | 2021-05-23T16:27:39.000Z | [
"pytorch",
"wav2vec2",
"sv-SE",
"dataset:common_voice",
"dataset:NST Swedish ASR Database",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"voxpopuli",
"license:cc-by-nc-4.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| KBLab | 354 | transformers | ---
language: sv-SE
datasets:
- common_voice
- NST Swedish ASR Database
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- voxpopuli
license: cc-by-nc-4.0
model-index:
- name: Wav2vec 2.0 large VoxPopuli-sv swedish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 10.994764
- name: Test CER
type: cer
value: 3.946846
---
# Wav2vec 2.0 large-voxpopuli-sv-swedish
Additionally pretrained and fine-tuned version of Facebook's [VoxPopuli-sv large](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) model using Swedish radio broadcasts, NST, and Common Voice data. Evaluation without a language model gives the following: WER for the NST + Common Voice test set (2% of total sentences) is **3.95%**; WER for the Common Voice test set is **10.99%** directly and **7.82%** with a 4-gram language model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Training
This model has additionally been pretrained on 1,000 hours of Swedish local radio broadcasts, then fine-tuned for 120,000 updates on NST + Common Voice and for an additional 20,000 updates on Common Voice only. The additional fine-tuning on Common Voice hurts performance on the NST + Common Voice test set somewhat and, unsurprisingly, improves it on the Common Voice test set. It seems to perform generally better overall, though [citation needed].
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxpopuli-sv-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxpopuli-sv-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` |
KBLab/wav2vec2-large-xlsr-53-swedish | 2021-04-13T09:07:25.000Z | [
"pytorch",
"wav2vec2",
"sv-SE",
"dataset:common_voice",
"dataset:NST Swedish ASR Database",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| KBLab | 223 | transformers | ---
language: sv-SE
datasets:
- common_voice
- NST Swedish ASR Database
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Swedish by KBLab
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sv-SE
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 14.298610
- name: Test CER
type: cer
value: 4.925294
---
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish using the [NST Swedish Dictation](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-17/) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model.to("cuda")
chars_to_ignore_regex = '[,?.!\\-;:"“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the fine-tuned model over the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**WER**: 14.298610%
**CER**: 4.925294%
## Training
First, the XLSR model was further pretrained for 50 epochs on a corpus consisting of 1,000 hours of spoken Swedish from various radio stations. Second, the [NST Swedish Dictation](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-17/) dataset was used for fine-tuning, together with [Common Voice](https://commonvoice.mozilla.org/en/datasets). Lastly, only the Common Voice dataset was used for a final round of fine-tuning. The [Fairseq](https://github.com/fairseq) scripts were used.
|
KETI-AIR/ke-t5-base-ko | 2021-05-27T14:38:56.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| KETI-AIR | 27 | transformers | |
KETI-AIR/ke-t5-base-newslike | 2021-05-27T14:39:09.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| KETI-AIR | 13 | transformers | |
KETI-AIR/ke-t5-base | 2021-05-27T14:38:14.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| KETI-AIR | 1,915 | transformers | |
KETI-AIR/ke-t5-large-ko | 2021-05-27T14:39:24.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| KETI-AIR | 34 | transformers | |
KETI-AIR/ke-t5-large-newslike | 2021-05-27T14:39:39.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| KETI-AIR | 194 | transformers | |
KETI-AIR/ke-t5-large | 2021-05-27T14:39:51.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| KETI-AIR | 36 | transformers | |
KETI-AIR/ke-t5-small-ko | 2021-05-27T14:40:05.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| KETI-AIR | 6 | transformers | |
KETI-AIR/ke-t5-small-newslike | 2021-05-27T14:40:18.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| KETI-AIR | 11 | transformers | |
KETI-AIR/ke-t5-small | 2021-05-27T14:40:31.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| KETI-AIR | 1,225 | transformers | |
KK/DialoGPT-small-Rick | 2021-06-11T03:07:42.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| KK | 13 | transformers | |
KY/KY_test_model | 2021-06-15T08:08:44.000Z | [
"pytorch",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| KY | 7 | transformers | |
KY/modeling_test_II | 2021-06-17T02:18:08.000Z | [
"pytorch",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| KY | 0 | transformers | |
Kal/Damen | 2021-03-01T20:05:23.000Z | []
| [
".gitattributes",
"README.md"
]
| Kal | 0 | |||
Kalindu/SinBerto | 2021-06-17T16:37:19.000Z | [
"pytorch",
"roberta",
"masked-lm",
"si",
"arxiv:1907.11692",
"transformers",
"SinBERTo",
"Sinhala",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| Kalindu | 17 | transformers | |
Kap/test-model | 2021-06-12T13:13:41.000Z | []
| [
".gitattributes"
]
| Kap | 0 | |||
Kapil/model_name | 2021-02-19T06:02:17.000Z | []
| [
".gitattributes"
]
| Kapil | 0 | |||
Karen/test_model | 2021-02-10T22:33:44.000Z | []
| [
".gitattributes"
]
| Karen | 0 | |||
Karimfayed/pegasus_SAMSum | 2021-05-09T20:50:32.000Z | [
"pytorch",
"pegasus",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
]
| Karimfayed | 32 | transformers | |
Kerui/CS412-Project | 2021-04-20T01:18:26.000Z | []
| [
".gitattributes"
]
| Kerui | 0 | |||
Khu1998/clog-assessment-model | 2021-05-19T11:18:26.000Z | [
"tf",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| Khu1998 | 12 | transformers | # CLOG Assessment generator model
|
Khu1998/clog-clo-model | 2021-06-13T17:22:02.000Z | [
"pytorch",
"jax",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| Khu1998 | 8 | transformers | |
KoichiYasuoka/bert-large-japanese-char-extended | 2021-06-13T13:44:57.000Z | [
"pytorch",
"bert",
"masked-lm",
"ja",
"transformers",
"japanese",
"wikipedia",
"license:cc-by-sa-3.0",
"fill-mask",
"pipeline_tag:fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| KoichiYasuoka | 201 | transformers | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
- "wikipedia"
license: "cc-by-sa-3.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "酸素ボンベを充[MASK]する。"
---
# bert-large-japanese-char-extended
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-large-japanese-char](https://huggingface.co/cl-tohoku/bert-large-japanese-char). Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters. You can fine-tune `bert-large-japanese-char-extended` for downstream tasks, such as POS-tagging, dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended")
```
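The widget example above can be reproduced with the `fill-mask` pipeline. This is a minimal sketch:
```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KoichiYasuoka/bert-large-japanese-char-extended")
print(fill_mask("酸素ボンベを充[MASK]する。"))
```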
|
KoichiYasuoka/roberta-classical-chinese-base-char | 2021-06-13T13:42:26.000Z | [
"pytorch",
"roberta",
"masked-lm",
"lzh",
"transformers",
"classical chinese",
"literary chinese",
"ancient chinese",
"license:apache-2.0",
"fill-mask",
"pipeline_tag:fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| KoichiYasuoka | 797 | transformers | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "masked-lm"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "孟子[MASK]梁惠王"
---
# roberta-classical-chinese-base-char
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base). Character embeddings are enhanced to cover both traditional and simplified characters. You can fine-tune `roberta-classical-chinese-base-char` for downstream tasks, such as sentencization, POS-tagging, dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char")
```
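The widget example above can be reproduced with the `fill-mask` pipeline. This is a minimal sketch:
```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KoichiYasuoka/roberta-classical-chinese-base-char")
print(fill_mask("孟子[MASK]梁惠王"))
```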
## See Also
[SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
|
KoichiYasuoka/roberta-classical-chinese-large-char | 2021-06-13T13:43:00.000Z | [
"pytorch",
"roberta",
"masked-lm",
"lzh",
"transformers",
"classical chinese",
"literary chinese",
"ancient chinese",
"license:apache-2.0",
"fill-mask",
"pipeline_tag:fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| KoichiYasuoka | 698 | transformers | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "masked-lm"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "孟子[MASK]梁惠王"
---
# roberta-classical-chinese-large-char
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-large](https://huggingface.co/ethanyt/guwenbert-large). Character embeddings are enhanced to cover both traditional and simplified characters. You can fine-tune `roberta-classical-chinese-large-char` for downstream tasks, such as sentencization, POS-tagging, dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
```
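As with the base model, the widget example above can be reproduced with the `fill-mask` pipeline. This is a minimal sketch:
```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KoichiYasuoka/roberta-classical-chinese-large-char")
print(fill_mask("孟子[MASK]梁惠王"))
```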
## See Also
[SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
|
Konstantinos/BERTaTweetGR | 2021-05-20T12:02:31.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"el",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| Konstantinos | 24 | transformers | ---
language: el
widget:
- text: "μπαινω στο <mask> και τι να δω."
---
# A lite RoBERTa fill-mask model trained mostly on Greek tweets
The training dataset of this model consists of 23 million Greek tweets from approximately 5,000 users in total, spanning 2008 to 2018.
The model was trained to support the work presented in the paper [Multimodal Hate Speech Detection in Greek Social Media (preprint v1)](https://www.preprints.org/manuscript/202103.0390/v1).
## Load the pretrained model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Konstantinos/BERTaTweetGR")
model = AutoModel.from_pretrained("Konstantinos/BERTaTweetGR")
```
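For masked-token prediction directly, the `fill-mask` pipeline can be used with the widget example above (a minimal sketch; note that this tokenizer uses the `<mask>` token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Konstantinos/BERTaTweetGR")
print(fill_mask("μπαινω στο <mask> και τι να δω."))
```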
|
Koraiem/test_1 | 2021-03-24T01:57:45.000Z | []
| [
".gitattributes"
]
| Koraiem | 0 | |||
Kyuyoung11/haremotions-v1 | 2021-06-14T07:09:20.000Z | []
| [
".gitattributes",
"README.md"
]
| Kyuyoung11 | 425 | |||
Kyuyoung11/haremotions-v2 | 2021-06-14T07:09:14.000Z | [
"pytorch",
"electra",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
]
| Kyuyoung11 | 172 | transformers | ||
Kyuyoung11/haremotions-v3 | 2021-06-14T18:43:46.000Z | [
"pytorch",
"electra",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| Kyuyoung11 | 48 | transformers | ||
LIAMF-USP/aristo-roberta | 2021-05-20T12:04:27.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"multiple-choice",
"english",
"dataset:race",
"dataset:ai2_arc",
"dataset:openbookqa",
"transformers",
"license:mit"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| LIAMF-USP | 33 | transformers | ---
language: "english"
license: "mit"
datasets:
- race
- ai2_arc
- openbookqa
metrics:
- accuracy
---
# Aristo RoBERTa
## Model description
This model follows the implementation by the Allen AI team of the [Aristo Roberta V7 Model](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0) submitted to the [ARC Challenge](https://leaderboard.allenai.org/arc/submissions/public).
#### How to use
```python
import datasets
import logging
import torch
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice

MAX_SEQ_LENGTH = 256  # max_length used during training (see the hyperparameters below)
tokenizer = RobertaTokenizer.from_pretrained(
"LIAMF-USP/aristo-roberta")
model = RobertaForMultipleChoice.from_pretrained(
"LIAMF-USP/aristo-roberta")
dataset = datasets.load_dataset(
"arc",,
split=["train", "validation", "test"],
)
training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]
example=training_examples[0]
example_id = example["example_id"]
question = example["question"]
label_example = example["answer"]
options = example["options"]
if label_example in ["A", "B", "C", "D", "E"]:
label_map = {label: i for i, label in enumerate(
["A", "B", "C", "D", "E"])}
elif label_example in ["1", "2", "3", "4", "5"]:
label_map = {label: i for i, label in enumerate(
["1", "2", "3", "4", "5"])}
else:
print(f"{label_example} not found")
while len(options) < 5:
empty_option = {}
empty_option['option_context'] = ''
empty_option['option_text'] = ''
options.append(empty_option)
choices_inputs = []
for ending_idx, option in enumerate(options):
ending = option["option_text"]
context = option["option_context"]
if question.find("_") != -1:
        # fill-in-the-blank questions
question_option = question.replace("_", ending)
else:
question_option = question + " " + ending
inputs = tokenizer(
context,
question_option,
add_special_tokens=True,
max_length=MAX_SEQ_LENGTH,
padding="max_length",
truncation=True,
return_overflowing_tokens=False,
)
if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0:
logging.warning(f"Question: {example_id} with option {ending_idx} was truncated")
choices_inputs.append(inputs)
label = label_map[label_example]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
[x["attention_mask"] for x in choices_inputs]
    # as the sentences follow the same structure, just one of them is
    # necessary to check
if "attention_mask" in choices_inputs[0]
else None
)
example_encoded = {
    "input_ids": torch.tensor([input_ids]),            # shape: (batch, num choices, seq length)
    "attention_mask": torch.tensor([attention_mask]),
    "labels": torch.tensor([label]),
}
output = model(**example_encoded)
```
## Training data
The training data was the same as proposed [here](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0).
The only difference was the hyperparameters of the RACE fine-tuned model, which are reported [here](https://huggingface.co/LIAMF-USP/roberta-large-finetuned-race#eval-results).
## Training procedure
It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. The hyperparameters used were the following:
| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 16 |
| train_batch_size | 4 |
| fp16 | True |
| gradient_accumulation_steps | 4 |
| learning_rate | 0.00001 |
| warmup_steps | 0.06 |
| max_length | 256 |
| epochs | 4 |
The other parameters were the default ones from [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) and [Trainer Arguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments)
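The table maps onto `transformers` `TrainingArguments` roughly as follows. This is a hedged sketch, not the authors' actual training script; the `output_dir` is a placeholder, and the table's `warmup_steps` value of 0.06 is interpreted here as a warmup ratio:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters above; not the authors' script.
training_args = TrainingArguments(
    output_dir="aristo-roberta-output",   # placeholder
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    per_device_eval_batch_size=16,
    per_device_train_batch_size=4,
    fp16=True,
    gradient_accumulation_steps=4,
    learning_rate=1e-5,
    warmup_ratio=0.06,                    # the table lists "warmup_steps | 0.06", which reads as a ratio
    num_train_epochs=4,
)
```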
## Eval results:
| Metric | Challenge Test |
|:----:|:----:|
| Accuracy | 65.358 |
**The model was trained with a TITAN RTX**
|
|
LIAMF-USP/roberta-large-finetuned-race | 2021-05-20T12:08:36.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"multiple-choice",
"english",
"dataset:race",
"transformers",
"license:mit"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| LIAMF-USP | 1,410 | transformers | ---
language: "english"
license: "mit"
datasets:
- race
metrics:
- accuracy
---
# Roberta Large Fine Tuned on RACE
## Model description
This model is RoBERTa-large fine-tuned on the RACE dataset.
#### How to use
```python
import datasets
import torch
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice

MAX_SEQ_LENGTH = 512  # max_length used during training (see the hyperparameters below)
tokenizer = RobertaTokenizer.from_pretrained(
"LIAMF-USP/roberta-large-finetuned-race")
model = RobertaForMultipleChoice.from_pretrained(
"LIAMF-USP/roberta-large-finetuned-race")
dataset = datasets.load_dataset(
"race",
"all",
split=["train", "validation", "test"],
)
training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]
example=training_examples[0]
example_id = example["example_id"]
question = example["question"]
context = example["article"]
options = example["options"]
label_example = example["answer"]
label_map = {label: i
for i, label in enumerate(["A", "B", "C", "D"])}
choices_inputs = []
for ending_idx, ending in enumerate(options):
if question.find("_") != -1:
        # fill-in-the-blank questions
question_option = question.replace("_", ending)
else:
question_option = question + " " + ending
inputs = tokenizer(
context,
question_option,
add_special_tokens=True,
max_length=MAX_SEQ_LENGTH,
padding="max_length",
truncation=True,
return_overflowing_tokens=False,
    )
    choices_inputs.append(inputs)
label = label_map[label_example]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
[x["attention_mask"] for x in choices_inputs]
    # as the sentences follow the same structure,
    # just one of them is necessary to check
if "attention_mask" in choices_inputs[0]
else None
)
example_encoded = {
    "input_ids": torch.tensor([input_ids]),            # shape: (batch, num choices, seq length)
    "attention_mask": torch.tensor([attention_mask]),
    "labels": torch.tensor([label]),
}
output = model(**example_encoded)
```
## Training data
The initial model was the [roberta large model](https://huggingface.co/roberta-large), which was then fine-tuned on the [RACE dataset](https://www.cs.cmu.edu/~glai1/data/race/).
## Training procedure
It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. The hyperparameters used were the following:
| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 32 |
| train_batch_size | 1 |
| fp16 | True |
| gradient_accumulation_steps | 16 |
| learning_rate | 0.00001 |
| warmup_steps | 1000 |
| max_length | 512 |
| epochs | 4 |
## Eval results:
| Metric | Eval | All Test | High School Test | Middle School Test |
|:----:|:----:|:----:|:----:|:----:|
| Accuracy | 85.2 | 84.9 | 83.5 | 88.0 |
**The model was trained with a Tesla V100-PCIE-16GB** |
|
LTNguyen/stsb_vn | 2020-11-19T06:32:28.000Z | []
| [
".gitattributes"
]
| LTNguyen | 0 | |||
LTNguyen/stsb_vv | 2020-11-19T06:47:19.000Z | []
| [
".gitattributes"
]
| LTNguyen | 0 | |||
Laeyoung/BTS-comments-generator | 2021-06-08T07:59:07.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"handler.py",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"MAR-INF/MANIFEST.json"
]
| Laeyoung | 50 | transformers | ### Model information
* Fine tuning dataset: https://www.kaggle.com/seungguini/bts-youtube-comments
* Base model: GPT2 Small
* Epoch: 5
* API page: [Ainize](https://ainize.ai/teachable-ainize/gpt2-train?branch=train/cv695m9g40av0cdabuqp)
* Demo page: [End-point](https://kubecon-tabtab-ainize-team.endpoint.ainize.ai/?modelUrl=https://train-cv695m9g40av0cdabuqp-gpt2-train-teachable-ainize.endpoint.ainize.ai/predictions/gpt-2-en-small-finetune)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune the model and get an API to use it for free.
* Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
* Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
LeBenchmark/wav2vec2-FR-M-base | 2021-05-03T15:07:13.000Z | [
"pytorch",
"wav2vec2",
"fr",
"transformers",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"checkpoint_best.pt",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| LeBenchmark | 201 | transformers | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 3K hours of French speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Not Available yet]()
## wav2vec2-FR-M-Base: model and data descriptions
We release four different models that can be found under our HuggingFace organisation. Two different wav2vec2 architectures, *Base* and *Large*, are coupled with our small (*S*) and medium (*M*) corpora. A larger one should come later. In short:
- [wav2vec2-FR-M-Large](#): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-M-Base](https://huggingface.co/LeBenchmark/wav2vec2-FR-M-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-S-Large](https://huggingface.co/LeBenchmark/wav2vec2-FR-S-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-S-Base](https://huggingface.co/LeBenchmark/wav2vec2-FR-S-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the apache-2.0 licence. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that it provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
Reference to come
```
|
|
LeBenchmark/wav2vec2-FR-M-large | 2021-05-03T14:30:27.000Z | [
"pytorch",
"wav2vec2",
"fr",
"transformers",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"checkpoint_best.pt",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| LeBenchmark | 210 | transformers | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 large model trained on 3K hours of French speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Not Available yet]()
## wav2vec2-FR-M-Large: model and data descriptions
We release four different models that can be found under our HuggingFace organisation. Two different wav2vec2 architectures, *Base* and *Large*, are coupled with our small (*S*) and medium (*M*) corpora. A larger one should come later. In short:
- [wav2vec2-FR-M-Large](#): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-M-Base](https://huggingface.co/LeBenchmark/wav2vec2-FR-M-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-S-Large](https://huggingface.co/LeBenchmark/wav2vec2-FR-S-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-S-Base](https://huggingface.co/LeBenchmark/wav2vec2-FR-S-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the apache-2.0 licence. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that it provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
Reference to come
```
|
|
LeBenchmark/wav2vec2-FR-S-base | 2021-05-03T15:08:01.000Z | [
"pytorch",
"wav2vec2",
"fr",
"transformers",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"checkpoint_best.pt",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| LeBenchmark | 9 | transformers | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 1K hours of French speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Not Available yet]()
## wav2vec2-FR-S-Base: model and data descriptions
We release four different models that can be found under our HuggingFace organisation. Two different wav2vec2 architectures, *Base* and *Large*, are coupled with our small (*S*) and medium (*M*) corpora. A larger one should come later. In short:
- [wav2vec2-FR-M-Large](#): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-M-Base](https://huggingface.co/LeBenchmark/wav2vec2-FR-M-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-S-Large](https://huggingface.co/LeBenchmark/wav2vec2-FR-S-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-S-Base](https://huggingface.co/LeBenchmark/wav2vec2-FR-S-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the apache-2.0 licence. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that it provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
Reference to come
```
|
|
LeBenchmark/wav2vec2-FR-S-large | 2021-05-03T15:03:44.000Z | [
"pytorch",
"wav2vec2",
"fr",
"transformers",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"checkpoint_best.pt",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin"
]
| LeBenchmark | 11 | transformers | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 large model trained on 1K hours of French speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Not Available yet]()
## wav2vec2-FR-S-Large: model and data descriptions
We release four different models that can be found under our HuggingFace organisation. Two different wav2vec2 architectures, *Base* and *Large*, are coupled with our small (*S*) and medium (*M*) corpora. A larger one should come later. In short:
- [wav2vec2-FR-M-Large](#): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-M-Base](https://huggingface.co/LeBenchmark/wav2vec2-FR-M-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-S-Large](https://huggingface.co/LeBenchmark/wav2vec2-FR-S-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-S-Base](https://huggingface.co/LeBenchmark/wav2vec2-FR-S-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the apache-2.0 licence. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that it provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
Reference to come
```
|
|
LeBoogle/CibaBot | 2021-06-03T01:20:43.000Z | []
| [
".gitattributes",
"README.md"
]
| LeBoogle | 0 | |||
Legendarysoren/Twitter | 2021-02-24T05:28:59.000Z | []
| [
".gitattributes",
"README.md"
]
| Legendarysoren | 0 | |||
LeoCordoba/beto2beto-ccnews-titles-es | 2021-05-17T23:00:00.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"es",
"dataset:ccnews-titles-es",
"transformers",
"summarization",
"spanish",
"beto2beto",
"license:apache-2.0",
"text2text-generation"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training_args.bin"
]
| LeoCordoba | 41 | transformers |
---
language: es
tags:
- summarization
- spanish
- beto2beto
- encoder-decoder
license: apache-2.0
datasets:
- ccnews-titles-es
model-index:
- name: beto2beto-ccnews-titles-es
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "CCNEWS-titles (Spanish)"
widget:
- text: |
La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno“, los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña.
---
## Hyperparameters
```json
{
  "num_train_epochs": 3,
  "seed": 7,
  "summary_column": "output_text",
  "text_column": "text",
  "encoder_max_length": 512,
  "decoder_max_length": 36,
  "batch_size": 256
}
```
## Usage
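A minimal usage sketch with the `summarization` pipeline, following the pattern of the other LeoCordoba cards (assuming the encoder-decoder checkpoint loads through `AutoModelForSeq2SeqLM`; the article is arbitrary and the generation length mirrors the decoder_max_length of 36 used during training):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="LeoCordoba/beto2beto-ccnews-titles-es")

article = "La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales."
summarizer(article, min_length=5, max_length=36)
```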
## Results
| key | value |
| --- | ----- |
| eval loss | 4.539857387542725|
| eval_rouge1 |23.7478 |
| eval_rouge2 |7.3616 |
| eval_rougeL |20.6615 |
| eval_rougeLsum |20.7371 |
| eval_gen_len| 16.1806|
|test loss | 4.515065670013428|
| test_rouge1 | 23.7415|
| test_rouge2 | 7.3548|
| test_rougeL | 20.746|
| test_rougeLsum | 20.8149|
| test_gen_len| 16.1926|
|
LeoCordoba/beto2beto-mlsum | 2021-04-19T13:56:18.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"es",
"dataset:mlsum - es",
"transformers",
"summarization",
"spanish",
"beto",
"license:apache-2.0",
"text2text-generation"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"eval_data.json",
"pytorch_model.bin",
"special_tokens_map.json",
"test_data.json",
"tokenizer.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
]
| LeoCordoba | 113 | transformers |
---
language: es
tags:
- summarization
- spanish
- encoder-decoder
- beto
license: apache-2.0
datasets:
- mlsum - es
model-index:
- name: beto2beto-mlsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "MLSUM: MultiLingual SUMmarization dataset (Spanish)"
type: mlsum
metrics:
- name: Validation ROUGE-1
  type: rouge-1
  value: 26.1256
- name: Validation ROUGE-2
  type: rouge-2
  value: 9.2552
- name: Validation ROUGE-L
  type: rouge-l
  value: 21.4899
- name: Validation ROUGE-Lsum
  type: rouge-lsum
  value: 21.8194
- name: Test ROUGE-1
  type: rouge-1
  value: 25.8639
- name: Test ROUGE-2
  type: rouge-2
  value: 8.911
- name: Test ROUGE-L
  type: rouge-l
  value: 21.2426
- name: Test ROUGE-Lsum
  type: rouge-lsum
  value: 21.5859
widget:
- text: |
La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña.
---
## beto2beto-mlsum
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
For more information look at:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)
## Hyperparameters
```json
{
  "dataset_config": "es",
  "dataset_name": "mlsum",
  "do_eval": true,
  "do_predict": true,
  "do_train": true,
  "fp16": true,
  "max_target_length": 64,
  "num_train_epochs": 10,
  "per_device_eval_batch_size": 4,
  "per_device_train_batch_size": 4,
  "predict_with_generate": true,
  "sagemaker_container_log_level": 20,
  "sagemaker_program": "run_summarization.py",
  "seed": 7,
  "summary_column": "summary",
  "text_column": "text"
}
```
## Usage
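A minimal usage sketch with the `summarization` pipeline (assuming the encoder-decoder checkpoint loads through `AutoModelForSeq2SeqLM`; the article is arbitrary and the generation length mirrors the max_target_length of 64 used during training):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="LeoCordoba/beto2beto-mlsum")

article = "La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales."
summarizer(article, min_length=5, max_length=64)
```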
## Results
| key | value |
| --- | ----- |
| validation_loss | 2.5021677017211914 |
| validation_rouge1 | 26.1256 |
| validation_rouge2 | 9.2552 |
| validation_rougeL | 21.4899 |
| validation_rougeLsum | 21.8194 |
| test_loss | 2.57672381401062 |
| test_rouge1 | 25.8639 |
| test_rouge2 | 8.911 |
| test_rougeL | 21.2426 |
| test_rougeLsum | 21.5859 |
|
LeoCordoba/beto2beto | 2021-05-10T00:57:42.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"es",
"dataset:cc-news-es",
"transformers",
"text-generation",
"spanish",
"beto",
"license:apache-2.0",
"text2text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt",
".ipynb_checkpoints/README-checkpoint.md"
]
| LeoCordoba | 88 | transformers | ---
language: es
tags:
- text-generation
- spanish
- encoder-decoder
- beto
license: apache-2.0
datasets:
- cc-news-es
model-index:
- name: beto2beto
results:
widget:
- text: |
La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación.
---
## beto2beto
Trained for 3 epochs on CC-NEWS-ES (2019), approximately 68,000 steps. Encoder max length: 40. Decoder max length: 128.
## Hyperparameters
## Usage
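A minimal usage sketch with the `text2text-generation` pipeline (assuming the encoder-decoder checkpoint loads through `AutoModelForSeq2SeqLM`; the prompt is arbitrary and kept short since the encoder max length was 40 during training):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="LeoCordoba/beto2beto")

prompt = "La chocotorta, el tradicional y práctico antojo dulce de los argentinos,"
generator(prompt, max_length=128)
```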
## Results
| key | value |
| --- | ----- |
| test_loss | 2.65148806571960452 |
|
LeoCordoba/mt5-small-ccnews-titles-es | 2021-05-10T01:31:40.000Z | [
"pytorch",
"mt5",
"seq2seq",
"es",
"dataset:ccnews-titles-es",
"transformers",
"summarization",
"spanish",
"license:apache-2.0",
"text2text-generation"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| LeoCordoba | 63 | transformers | ---
language: es
tags:
- summarization
- mt5
- spanish
license: apache-2.0
datasets:
- ccnews-titles-es
model-index:
- name: mt5-small-ccnews-titles-es
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "CCNEWS-titles (Spanish)"
widget:
- text: "La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno“, los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña."
---
## Hyperparameters
```json
{
  "max_target_length": 64,
  "model_name_or_path": "google/mt5-small",
  "num_train_epochs": 3,
  "seed": 7,
  "summary_column": "output_text",
  "text_column": "text",
  "encoder_max_length": 512,
  "decoder_max_length": 36,
  "batch_size": 128
}
```
## Usage
```python
article = """ La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña. """
from transformers import pipeline
summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-ccnews-titles-es")
summarizer(article, min_length=5, max_length=64)
```
## Results
| metric | score |
| --- | ----- |
| eval_loss | 2.879085063934326 |
| eval_rouge1 | 22.6623 |
| eval_rouge2 | 7.7894 |
| eval_rougeL | 19.8015 |
| eval_rougeLsum | 19.8092 |
| eval_gen_len | 17.1839 |
| test_loss | 2.878429412841797 |
| test_rouge1 | 22.9263 |
| test_rouge2 | 7.9146 |
| test_rougeL | 20.0272 |
| test_rougeLsum | 20.0387 |
| test_gen_len | 17.1696 | |
LeoCordoba/mt5-small-mlsum | 2021-04-13T12:53:38.000Z | [
"pytorch",
"mt5",
"seq2seq",
"es",
"dataset:mlsum - es",
"transformers",
"summarization",
"sagemaker",
"spanish",
"license:apache-2.0",
"text2text-generation"
]
| summarization | [
".gitattributes",
"README.md",
"all_results.json",
"config.json",
"eval_results.json",
"metadata.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"test_generations.txt",
"test_results.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin"
]
| LeoCordoba | 223 | transformers |
---
language: es
tags:
- summarization
- sagemaker
- mt5
- spanish
license: apache-2.0
datasets:
- mlsum - es
model-index:
- name: mt5-small-mlsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "MLSUM: MultiLingual SUMmarization dataset (Spanish)"
type: mlsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 26.4352
- name: Validation ROUGE-2
type: rouge-2
value: 8.9293
- name: Validation ROUGE-L
type: rouge-l
value: 21.2622
- name: Validation ROUGE-LSUM
type: rouge-lsum
value: 21.5518
- name: Test ROUGE-1
type: rouge-1
value: 26.0756
- name: Test ROUGE-2
type: rouge-2
value: 8.4669
- name: Test ROUGE-L
type: rouge-l
value: 20.8167
- name: Test ROUGE-LSUM
type: rouge-lsum
value: 21.0822
widget:
- text: "La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno“, los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña."
---
## mt5-small-mlsum
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
For more information, see:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)
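As a rough illustration, a job with the hyperparameters below could be launched through the SageMaker Python SDK as follows (a sketch only: the role, instance type, script location, and framework versions are assumptions, not the exact setup used here):
```
from sagemaker.huggingface import HuggingFace

hyperparameters = {
    "model_name_or_path": "google/mt5-small",
    "dataset_name": "mlsum",
    "dataset_config": "es",
    "num_train_epochs": 10,
    "per_device_train_batch_size": 4,
    "output_dir": "/opt/ml/checkpoints",
}

estimator = HuggingFace(
    entry_point="run_summarization.py",
    source_dir="./examples/pytorch/summarization",  # assumed script location
    instance_type="ml.p3.2xlarge",                  # assumption
    instance_count=1,
    role="<your-sagemaker-execution-role>",         # placeholder
    transformers_version="4.6",                     # assumption
    pytorch_version="1.7",                          # assumption
    py_version="py36",
    hyperparameters=hyperparameters,
)
estimator.fit()
```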
## Hyperparameters
```
{
  "dataset_config": "es",
  "dataset_name": "mlsum",
  "do_eval": true,
  "do_predict": true,
  "do_train": true,
  "fp16": true,
  "max_target_length": 64,
  "model_name_or_path": "google/mt5-small",
  "num_train_epochs": 10,
  "output_dir": "/opt/ml/checkpoints",
  "per_device_eval_batch_size": 4,
  "per_device_train_batch_size": 4,
  "predict_with_generate": true,
  "sagemaker_container_log_level": 20,
  "sagemaker_program": "run_summarization.py",
  "save_strategy": "epoch",
  "seed": 7,
  "summary_column": "summary",
  "text_column": "text"
}
```
## Usage
```
article = """ La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña. """
from transformers import pipeline
summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-mlsum")
summarizer(article, min_length=5, max_length=64)
```
result: [{'summary_text': 'El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche'}]
## Results
| metric | score |
| --- | ----- |
| eval_rouge1 | 26.4352 |
| eval_rouge2 | 8.9293 |
| eval_rougeL | 21.2622 |
| eval_rougeLsum | 21.5518 |
| test_rouge1 | 26.0756 |
| test_rouge2 | 8.4669 |
| test_rougeL | 20.8167 |
| test_rougeLsum | 21.0822 |
|
Liam/NRL-full | 2021-05-19T11:18:42.000Z | [
"tf",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| Liam | 12 | transformers | ||
Liam/NRL | 2021-05-19T11:19:08.000Z | [
"tf",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| Liam | 18 | transformers | ||
LilaBoualili/bert-pre-doc | 2021-05-20T09:58:43.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| LilaBoualili | 6 | transformers | |
LilaBoualili/bert-pre-pair | 2021-05-20T09:59:45.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| LilaBoualili | 11 | transformers | |
LilaBoualili/bert-sim-doc | 2021-05-20T09:57:43.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| LilaBoualili | 6 | transformers | |
LilaBoualili/bert-sim-pair | 2021-05-18T21:26:27.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| LilaBoualili | 36 | transformers | At its core it uses a BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy, which highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking.
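A minimal sketch of the marking idea (simplified and assumed for illustration; it marks only passage terms, while the authors' exact marking rules live in the linked repository):
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LilaBoualili/bert-sim-pair")
model = AutoModelForSequenceClassification.from_pretrained("LilaBoualili/bert-sim-pair")

def mark_exact_matches(query, passage):
    # Wrap passage terms that also occur in the query with '#' markers (simplified)
    q_terms = set(query.lower().split())
    marked = ["# " + w + " #" if w.lower() in q_terms else w for w in passage.split()]
    return " ".join(marked)

query = "argentine dessert"
passage = "The chocotorta is a popular argentine dessert made with cookies."
inputs = tokenizer(query, mark_exact_matches(query, passage), return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits  # relevance logits for the (query, passage) pair
```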
|
LilaBoualili/bert-vanilla | 2021-05-18T21:27:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| LilaBoualili | 19 | transformers | At its core it uses a BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes.
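For example, a minimal loading and scoring sketch (the query and passage below are illustrative):
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LilaBoualili/bert-vanilla")
model = AutoModelForSequenceClassification.from_pretrained("LilaBoualili/bert-vanilla")

inputs = tokenizer("best dessert in argentina",
                   "The chocotorta was voted the best dessert in the world.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # passage-relevance classification logits
```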
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking. |
LilaBoualili/electra-pre-doc | 2021-05-18T15:04:09.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| LilaBoualili | 6 | transformers |