OpenAI Whisper base model converted to ONNX format for onnx-asr.

## Install onnx-asr

```shell
pip install onnx-asr[cpu,hub]
```

## Load the whisper-base model and recognize a wav file

```python
import onnx_asr

model = onnx_asr.load_model("whisper-base")
print(model.recognize("test.wav"))
```
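Whisper expects 16 kHz mono PCM audio. If you don't have a sample file handy, a suitable `test.wav` can be synthesized with the Python standard library; `write_test_wav` below is a hypothetical helper for illustration, not part of onnx-asr:

```python
import math
import struct
import wave

def write_test_wav(path, sample_rate=16000, freq=440.0, seconds=1.0):
    """Write a sine tone as a 16 kHz mono 16-bit PCM WAV file."""
    n = int(sample_rate * seconds)
    # Half-amplitude 440 Hz sine, quantized to int16.
    samples = [
        int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / sample_rate))
        for i in range(n)
    ]
    with wave.open(path, "wb") as f:
        f.setnchannels(1)       # mono
        f.setsampwidth(2)       # 16-bit PCM
        f.setframerate(sample_rate)
        f.writeframes(struct.pack("<%dh" % n, *samples))

write_test_wav("test.wav")
```

A real speech recording will of course produce more interesting transcriptions; this just gives `model.recognize` a well-formed input to run on.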

## Model export

See the onnxruntime instructions for converting Whisper to ONNX.

Download the model and export it with beam search and forced decoder input ids:

```shell
python3 -m onnxruntime.transformers.models.whisper.convert_to_onnx -m openai/whisper-base --output ./whisper-onnx --use_forced_decoder_ids --optimize_onnx --precision fp32
```

Save the tokenizer config alongside the exported model:

```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-base")
tokenizer.save_pretrained("whisper-onnx")
```