BLIP for RSICD image captioning:
blip-image-captioning-base
model has been fine-tuned on the RSICD dataset. The training parameters used are as follows:
- learning_rate = 5e-7
- optimizer = AdamW
- scheduler = ReduceLROnPlateau
- epochs = 5
- More details (demo, testing, evaluation, metrics) are available in the GitHub repo
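The training configuration above can be sketched in standard PyTorch. This is a minimal illustration, assuming a plain AdamW optimizer paired with `ReduceLROnPlateau` stepped on a validation metric; the actual fine-tuning script (linked from the repo) may use different scheduler settings, and the `patience`/`factor` values here are illustrative, not from the card.

```python
# Sketch of the training setup listed on this card (assumption: standard
# PyTorch AdamW + ReduceLROnPlateau; the real fine-tuning code may differ).
import torch


def build_optimizer(model, lr=5e-7, patience=2, factor=0.1):
    """AdamW with a ReduceLROnPlateau scheduler, per the hyperparameters above.
    patience and factor are illustrative defaults, not stated on the card."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=factor, patience=patience
    )
    return optimizer, scheduler


if __name__ == "__main__":
    # Stand-in module; in practice this would be the BLIP captioning model.
    model = torch.nn.Linear(8, 8)
    optimizer, scheduler = build_optimizer(model)
    for epoch in range(5):          # epochs = 5, as on the card
        val_loss = 1.0              # a plateauing validation loss
        scheduler.step(val_loss)    # LR is cut by `factor` once patience runs out
    print(optimizer.param_groups[0]["lr"])
```

With a flat validation loss, the scheduler cuts the learning rate by `factor` after `patience` epochs without improvement, which is the point of pairing the very low 5e-7 starting rate with plateau-based decay.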