A small hobby project trained in a Kaggle notebook using their free P100 GPUs. I was curious whether whisper-tiny could be trained to perform decently if you specialized it for a single language, in this case Danish. The TL;DR is that the results are not great :)
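
The training code isn't included in the card, so the following is only a minimal sketch of the kind of recipe involved: fine-tuning openai/whisper-tiny on Danish speech with the Hugging Face Seq2SeqTrainer. The dataset choice (Common Voice 11 Danish) and every hyperparameter below are assumptions, not the actual setup.

from datasets import load_dataset, Audio
from transformers import (WhisperProcessor, WhisperForConditionalGeneration,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

# Base checkpoint, configured for Danish transcription.
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-tiny", language="danish", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model.generation_config.language = "danish"
model.generation_config.task = "transcribe"

# Assumed dataset; Common Voice requires accepting its terms on the Hub.
cv = load_dataset("mozilla-foundation/common_voice_11_0", "da")
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    # Log-mel input features from the waveform, token ids from the transcript.
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

cv = cv.map(prepare, remove_columns=cv["train"].column_names)

def collate(features):
    # Pad audio features and label ids separately; mask label padding with -100
    # and drop the leading decoder-start token (the model re-adds it).
    batch = processor.feature_extractor.pad(
        [{"input_features": f["input_features"]} for f in features],
        return_tensors="pt")
    labels_batch = processor.tokenizer.pad(
        [{"input_ids": f["labels"]} for f in features], return_tensors="pt")
    labels = labels_batch["input_ids"].masked_fill(
        labels_batch["attention_mask"].ne(1), -100)
    if (labels[:, 0] == model.config.decoder_start_token_id).all():
        labels = labels[:, 1:]
    batch["labels"] = labels
    return batch

args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny.da",
    per_device_train_batch_size=16,   # assumed; sized for a single P100
    learning_rate=1e-5,               # assumed
    max_steps=4000,                   # assumed
    fp16=True,
    predict_with_generate=True,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    data_collator=collate,
    train_dataset=cv["train"],
    eval_dataset=cv["test"],
)
trainer.train()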

from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", 
                model="rasgaard/whisper-tiny.da")
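
Once loaded, the pipeline transcribes audio directly; the file name below is just a placeholder.

# "danish_sample.wav" is a placeholder path; ffmpeg is needed for decoding.
result = pipe("danish_sample.wav")
print(result["text"])
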
Model size: 57.7M params · Tensor type: F32 · Safetensors

Model tree for rasgaard/whisper-tiny.da
Fine-tuned from whisper-tiny (one of 1593 fine-tunes of the base model).

Dataset used to train rasgaard/whisper-tiny.da