# Model Card for f5-tts-hakka-finetune

## Model Details
F5-TTS finetuned on all Formosan data (ithuan, the FB ILRDF dictionary, and Klokah), excluding samples that contain only a single word, and using IPA as the input representation.
The G2P is taken from this repo.
## Training Details
- learning rate: 0.00001
- batch size per GPU: 6400
- batch size type: frame
- max samples: 64
- grad accumulation steps: 1
- max grad norm: 1
- epochs: 210 (1,704,780 total steps; the released checkpoint is at step 1,081,600, after which the loss began to rise)
- num warmup updates: 27040
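
For reference, the hyperparameters above can be collected into a single config. This is only an illustrative sketch: the key names are assumptions and do not necessarily match the option names used by the F5-TTS training scripts.

```python
# Illustrative summary of the finetuning hyperparameters listed above.
# Key names are assumptions, not the exact F5-TTS config option names.
finetune_config = {
    "learning_rate": 1e-5,
    "batch_size_per_gpu": 6400,      # measured in frames (batch_size_type: frame)
    "batch_size_type": "frame",
    "max_samples": 64,
    "grad_accumulation_steps": 1,
    "max_grad_norm": 1.0,
    "epochs": 210,                   # 1,704,780 scheduled steps in total
    "num_warmup_updates": 27040,
    "checkpoint_step": 1_081_600,    # released checkpoint; loss rose after this point
}
```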
## Model Sources
- Repository: https://github.com/SWivid/F5-TTS
- Paper: https://arxiv.org/abs/2410.06885
## Uses
Please refer to the source repo.
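
As a quick orientation, below is a minimal inference sketch assuming the Python API exposed by the F5-TTS source repository (`f5_tts.api.F5TTS`). The argument names, return values, and the checkpoint/vocab file paths are assumptions and may differ across versions; consult the repo's README for the current interface. Note that `gen_text` must be an IPA string produced by the G2P tool mentioned above, since the model was finetuned with IPA input.

```python
# Minimal inference sketch (assumed API; verify against the F5-TTS repo).
from f5_tts.api import F5TTS

tts = F5TTS(
    ckpt_file="model_1081600.pt",  # checkpoint from this finetune (path is illustrative)
    vocab_file="vocab.txt",        # vocabulary matching the IPA symbol set (illustrative)
)

wav, sr, _ = tts.infer(
    ref_file="reference.wav",      # short reference clip in the target language
    ref_text="...",                # transcript of the reference clip (IPA)
    gen_text="...",                # IPA string to synthesize, from the G2P step
    file_wave="output.wav",        # where to save the generated audio
)
```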