πŸ“Έ Adasah - Qwen 2.5 VL 3B (4-bit) Fine-tuned on Arabic Photo Q&A and Descriptions

Demo Video:

Adasah - iOS App

App Store

Warning - On first launch, the app downloads a ~2 GB model, so initial setup takes some time.

Forked from HuggingSnap

https://github.com/huggingface/HuggingSnap

Model Name: Adasah
Base Model: Qwen/Qwen2.5-VL-3B-Instruct
Quantization: 4-bit (MLX)
Platform: iOS (mobile-compatible)
Language: Arabic (training data translated from English)
Use Case: Arabic Visual Q&A and Photo Description Understanding


🧠 Model Overview

Adasah is a fine-tuned variant of Qwen2.5-VL-3B-Instruct, optimized for Arabic language understanding in visual contexts. The model was trained on a custom dataset of English visual question-answer pairs and photo descriptions translated into Arabic, enabling it to:

  • Answer Arabic questions about images
  • Generate Arabic descriptions of visual content
  • Serve as a mobile assistant for Arabic-speaking users

The model is quantized to 4-bit for smooth on-device performance in iOS apps.

πŸ“± Mobile Optimization

The model is quantized to 4-bit precision, making it lightweight enough for on-device inference in:

  • iOS apps
  • Offline-first mobile experiences
  • Arabic language educational or accessibility tools

Use with mlx

pip install -U mlx-vlm
python -m mlx_vlm.generate \
  --model NAMAA-Space/Adasah-QA-0.1-3B-Instruct-merged-4bits \
  --max-tokens 100 --temperature 0.0 \
  --prompt "Describe this image." \
  --image <path_to_image>
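
For programmatic use, the same model can be driven from Python. The sketch below follows the load / apply_chat_template / generate pattern from mlx-vlm's own documentation; exact signatures and return types vary between mlx-vlm releases, and the image path and Arabic prompt are placeholders chosen for illustration.

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "NAMAA-Space/Adasah-QA-0.1-3B-Instruct-merged-4bits"
model, processor = load(model_path)   # first call downloads the weights (~2 GB)
config = load_config(model_path)

images = ["path/to/photo.jpg"]        # placeholder: any local image path
prompt = "Ω…Ψ§Ψ°Ψ§ يظهر في Ω‡Ψ°Ω‡ Ψ§Ω„Ψ΅ΩˆΨ±Ψ©ΨŸ"     # Arabic for "What appears in this image?"

# Wrap the question in the model's chat template before generating.
formatted = apply_chat_template(processor, config, prompt, num_images=len(images))
output = generate(model, processor, formatted, images, verbose=False)
print(output)

Loading the model once and reusing it across questions avoids repeating the multi-gigabyte load step.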