# 📸 Adasah - Qwen 2.5 3B (4-bit) Fine-tuned on Arabic Photo Q&A and Descriptions
Demo Video:
Adasah - iOS App

Warning: the app downloads a 2GB model, so the first launch takes some time.

Forked from [HuggingSnap](https://github.com/huggingface/HuggingSnap).
- **Model Name:** Adasah
- **Base Model:** Qwen/Qwen2.5-VL-3B-Instruct
- **Quantization:** 4-bit (GGUF)
- **Platform:** iOS (mobile-compatible)
- **Language:** Arabic (translated from English)
- **Use Case:** Arabic visual Q&A and photo description understanding
## 🧠 Model Overview

Adasah is a fine-tuned variant of the Qwen2.5-VL-3B-Instruct base model, optimized for Arabic language understanding in visual contexts. The model was trained on a custom dataset of English visual question-answer pairs and photo descriptions translated into Arabic, allowing it to:
- Answer Arabic questions about images
- Generate Arabic descriptions of visual content
- Serve as a mobile assistant for Arabic-speaking users
The model is quantized to 4-bit to ensure smooth on-device performance in iOS apps.
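The back-of-the-envelope arithmetic behind the ~2GB download mentioned above: 3 billion parameters at 4 bits per weight works out to roughly 1.5 GB of weight data, with the remainder accounted for by embeddings, tokenizer files, and format overhead. A minimal sketch (the helper function is illustrative, not part of any library):

```python
def model_size_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB for a quantized model."""
    return n_params * bits_per_weight / 8 / 1e9

# 3B parameters at 4-bit quantization: ~1.5 GB of raw weights,
# versus ~6 GB at full 16-bit precision.
print(model_size_gb(3e9, 4))   # 4-bit
print(model_size_gb(3e9, 16))  # fp16 baseline
```

This 4x reduction versus fp16 is what makes on-device inference on a phone practical.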
## 📱 Mobile Optimization
The model is quantized using 4-bit precision to make it lightweight and suitable for on-device inference in:
- iOS apps
- Offline-first mobile experiences
- Arabic language educational or accessibility tools
## Use with mlx

```bash
pip install -U mlx-vlm
python -m mlx_vlm.generate --model NAMAA-Space/Adasah-QA-0.1-3B-Instruct-merged-4bits --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```