
Whisper Base Bn LoRA Adapter (10k steps) - by BanglaBridge

This model is a PEFT LoRA adapter fine-tuned from openai/whisper-base on the Bengali (bn) subset of the Common Voice 17.0 dataset. After 10k training steps it achieves the following results on the test set:

  • WER: 46.25395
  • Normalized WER: 23.31617
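For reference, the word error rate (WER) reported above is the word-level edit distance between reference and hypothesis transcripts, divided by the number of reference words. A minimal self-contained sketch (not the evaluation script used for this card, which likely relied on a library such as `evaluate` or `jiwer`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: word-level edit distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return 100.0 * d[len(r)][len(h)] / len(r)
```

The "Normalized WER" above is the same metric computed after text normalization (e.g. lowercasing and punctuation removal), which is why it is substantially lower.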

Refer to the fully trained 20k-step adapter repository for more details on the fine-tuning: banglabridge/base-bn-lora-adapter
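Since this is a PEFT adapter rather than a full model, it must be attached to the base checkpoint at load time. A minimal sketch of the standard Transformers + PEFT loading pattern (the repo id is this model's; the `language`/`task` settings assume Bengali transcription):

```python
def load_bn_adapter():
    """Load openai/whisper-base and attach this LoRA adapter (downloads from the Hub)."""
    from transformers import WhisperForConditionalGeneration, WhisperProcessor
    from peft import PeftModel

    # Base Whisper checkpoint the adapter was trained on.
    base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
    # Wrap the base model with the LoRA weights from this repository.
    model = PeftModel.from_pretrained(base, "Da4ThEdge/base-bn-lora-adapter-cp10k")
    # Processor configured for Bengali transcription.
    processor = WhisperProcessor.from_pretrained(
        "openai/whisper-base", language="bn", task="transcribe"
    )
    return model, processor
```

After loading, `model.generate(...)` can be used on processed audio features as with any Whisper model; `model.merge_and_unload()` folds the LoRA weights into the base model for faster inference.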

Framework versions

  • Transformers 4.40.2
  • PyTorch 2.6.0+cu124
  • Datasets 3.5.1
  • Tokenizers 0.19.1
  • PEFT 0.10.0
