# whisper-finetuned-v3_50e_augment_new
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1259
- WER (word error rate): 52.6441
- CER (character error rate): 28.4843
## Model description
More information needed
## Intended uses & limitations
More information needed
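Pending those details, here is a minimal transcription sketch using the `transformers` pipeline API, assuming the checkpoint is hosted at tranha/whisper-finetuned-v3_50e_augment_new (the repo this card describes); the audio path is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="tranha/whisper-finetuned-v3_50e_augment_new",
)

# "sample.wav" is an illustrative path, not a file shipped with the model.
result = asr("sample.wav")
print(result["text"])
```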
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
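For reference, these settings map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is a reconstruction from the list above, not the author's training script; `output_dir` is a placeholder, and fp16 is an assumption about what "Native AMP" used here:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch mirroring the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-finetuned-v3_50e_augment_new",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,  # effective total train batch size: 4
    seed=42,
    optim="adamw_torch",            # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,                      # "Native AMP"; fp16 (vs. bf16) is an assumption
)
```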
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|---|---|---|---|---|---|
0.0997 | 1.0 | 1817 | 0.0970 | 67.6561 | 32.4126 |
0.0625 | 2.0 | 3634 | 0.0800 | 63.1184 | 31.0781 |
0.043 | 3.0 | 5451 | 0.0778 | 61.1054 | 30.2406 |
0.0356 | 4.0 | 7268 | 0.0727 | 58.4101 | 29.7911 |
0.0246 | 5.0 | 9085 | 0.0784 | 58.2736 | 30.1020 |
0.0185 | 6.0 | 10902 | 0.0795 | 58.9901 | 30.2346 |
0.0152 | 7.0 | 12719 | 0.0823 | 56.7724 | 29.5693 |
0.0109 | 8.0 | 14536 | 0.0864 | 56.8748 | 29.2486 |
0.0092 | 9.0 | 16353 | 0.0834 | 55.8854 | 29.1001 |
0.0079 | 10.0 | 18170 | 0.0901 | 56.3630 | 29.2129 |
0.0057 | 11.0 | 19987 | 0.0934 | 56.1242 | 29.3694 |
0.0057 | 12.0 | 21804 | 0.0965 | 56.8748 | 29.5476 |
0.007 | 13.0 | 23621 | 0.0974 | 56.1583 | 29.4901 |
0.0045 | 14.0 | 25438 | 0.1018 | 57.1477 | 29.2387 |
0.0037 | 15.0 | 27255 | 0.0958 | 56.1583 | 29.7475 |
0.0036 | 16.0 | 29072 | 0.0966 | 55.8171 | 29.5258 |
0.0047 | 17.0 | 30889 | 0.1012 | 54.8959 | 29.1199 |
0.0034 | 18.0 | 32706 | 0.0978 | 56.8066 | 28.8981 |
0.003 | 19.0 | 34523 | 0.1010 | 55.5442 | 29.0862 |
0.0034 | 20.0 | 36340 | 0.0981 | 55.7489 | 29.0506 |
0.0027 | 21.0 | 38157 | 0.1034 | 55.2371 | 28.9239 |
0.0021 | 22.0 | 39974 | 0.0997 | 54.1453 | 28.6566 |
0.0023 | 23.0 | 41791 | 0.1038 | 55.6124 | 29.1813 |
0.0016 | 24.0 | 43608 | 0.1049 | 54.8959 | 29.0209 |
0.0024 | 25.0 | 45425 | 0.1033 | 54.8277 | 29.4981 |
0.0016 | 26.0 | 47242 | 0.1031 | 55.4418 | 28.8387 |
0.0016 | 27.0 | 49059 | 0.1109 | 54.2136 | 29.2308 |
0.001 | 28.0 | 50876 | 0.1076 | 54.2136 | 29.1219 |
0.0008 | 29.0 | 52693 | 0.1109 | 55.3736 | 29.4406 |
0.0014 | 30.0 | 54510 | 0.1069 | 53.5995 | 28.7318 |
0.0011 | 31.0 | 56327 | 0.1112 | 55.0324 | 28.9932 |
0.001 | 32.0 | 58144 | 0.1131 | 55.8512 | 29.3456 |
0.0011 | 33.0 | 59961 | 0.1102 | 54.6912 | 29.1516 |
0.0006 | 34.0 | 61778 | 0.1144 | 53.9748 | 28.9852 |
0.0006 | 35.0 | 63595 | 0.1131 | 54.7936 | 29.2506 |
0.0003 | 36.0 | 65412 | 0.1148 | 54.5889 | 28.7358 |
0.0004 | 37.0 | 67229 | 0.1094 | 53.4630 | 28.8150 |
0.0006 | 38.0 | 69046 | 0.1104 | 53.4630 | 28.5259 |
0.0003 | 39.0 | 70863 | 0.1145 | 53.6336 | 28.6368 |
0.0002 | 40.0 | 72680 | 0.1160 | 53.0194 | 28.6962 |
0.0001 | 41.0 | 74497 | 0.1186 | 53.4971 | 28.3655 |
0.0002 | 42.0 | 76314 | 0.1112 | 52.6100 | 28.4467 |
0.0001 | 43.0 | 78131 | 0.1168 | 52.9853 | 28.5615 |
0.0002 | 44.0 | 79948 | 0.1192 | 52.5077 | 28.6368 |
0.0 | 45.0 | 81765 | 0.1236 | 52.8830 | 28.5675 |
0.0001 | 46.0 | 83582 | 0.1234 | 52.9171 | 28.3556 |
0.0001 | 47.0 | 85399 | 0.1229 | 52.6100 | 28.5021 |
0.0 | 48.0 | 87216 | 0.1260 | 52.6783 | 28.5536 |
0.0 | 49.0 | 89033 | 0.1257 | 52.6783 | 28.4902 |
0.0 | 49.9727 | 90800 | 0.1259 | 52.6441 | 28.4843 |
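WER and CER of this kind are commonly computed with the `evaluate` library. A minimal sketch of how such scores are produced; the reference/prediction strings are illustrative, and the x100 scaling assumes the card reports percentages, as the magnitudes suggest:

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Illustrative strings, not drawn from the actual evaluation set.
references = ["the quick brown fox jumps over the lazy dog"]
predictions = ["the quick brown fox jump over the lazy dog"]

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
cer = 100 * cer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```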
### Framework versions
- Transformers 4.51.3
- PyTorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1