Mohamed Rashad PRO
MohamedRashad
AI & ML interests
Computer Vision, Robotics, Natural Language Processing
Recent Activity
updated a Space · 3 days ago · Navid-AI/The-Arabic-Rag-Leaderboard
new activity · 8 days ago · Navid-AI/The-Arabic-Rag-Leaderboard: Model submission result

replied to their post · 1 day ago
Post · 1479
If someone is interested in trying the new rednote-hilab/dots.ocr model, I made this Space for you:
MohamedRashad/Dots-OCR
The output of the model is JSON. That's what is crazy about it, in my opinion.

posted an update · 19 days ago
Post · 1479
If someone is interested in trying the new rednote-hilab/dots.ocr model, I made this Space for you:
MohamedRashad/Dots-OCR
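If you want to call the Space programmatically rather than through the UI, a minimal sketch with gradio_client could look like this; the endpoint name and input signature here are assumptions, so check the Space's "Use via API" panel for the real ones:

# A minimal sketch, not the Space's confirmed API: api_name and the
# input/output shapes are assumptions (see the "Use via API" panel).
from gradio_client import Client, handle_file

client = Client("MohamedRashad/Dots-OCR")
result = client.predict(
    handle_file("page.png"),  # a local scan or photo of a document page
    api_name="/predict",      # assumed endpoint name
)
print(result)                 # expected: structured JSON describing the page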

posted an update · about 1 month ago
Post · 1832
For anyone who wants to try the new Voxtral models, you can do this from here:
MohamedRashad/Voxtral
You can also find the transformers versions of them here:
MohamedRashad/Voxtral-Mini-3B-2507-transformers
MohamedRashad/Voxtral-Small-24B-2507-transformers
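To run the transformers versions locally, a minimal sketch along these lines should work, assuming a transformers release with Voxtral support; the message format follows the upstream Voxtral examples and may differ for these repos:

# A minimal sketch, assuming transformers >= 4.54 with Voxtral support;
# the chat-template message format follows the upstream Voxtral examples.
import torch
from transformers import AutoProcessor, VoxtralForConditionalGeneration

repo_id = "MohamedRashad/Voxtral-Mini-3B-2507-transformers"
processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

conversation = [{
    "role": "user",
    "content": [
        {"type": "audio", "path": "sample.wav"},             # local audio clip
        {"type": "text", "text": "Transcribe this audio."},
    ],
}]
inputs = processor.apply_chat_template(conversation)
inputs = inputs.to(model.device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(
    outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0])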

posted an update · 2 months ago
Post · 1854
I think we just got the best image-to-Markdown VLM out there, and it's hosted here:
MohamedRashad/Nanonets-OCR

posted an update · 3 months ago
Post · 353
I just updated an old (non-working) Space I had with an implementation of a cool research paper named UniRig.
The idea is that you upload any 3D model and it rigs it for you: it builds the correct armature and handles the skinning process, giving you a final model that is fully rigged and ready to use.
Check it out here:
MohamedRashad/UniRig

posted an update · 4 months ago
Post · 1068
I have processed and cleaned the famous SADA2022 dataset from SDAIA for Arabic ASR and other related tasks, and uploaded it here:
MohamedRashad/SADA22
Edit: I also added another dataset from SDAIA, named SCC22:
MohamedRashad/SCC22
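To take a quick look at the data, a minimal sketch with the datasets library; the "train" split name and the column layout are assumptions, so check the dataset card for the actual schema:

# A minimal sketch; the "train" split and exact columns are assumptions.
from datasets import load_dataset

sada = load_dataset("MohamedRashad/SADA22", split="train")
print(sada)      # features and row count
print(sada[0])   # one example, e.g. an audio array plus its transcript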

replied to their post · 4 months ago
Speech data in audio and text format

replied to their post · 4 months ago
Start with gathering high-quality data first. This is by far the biggest hurdle facing TTS systems out there.

posted an update · 5 months ago
Post · 2702
I collected the recitations of the Holy Quran from 20 different reciters and uploaded the full dataset here:
MohamedRashad/Quran-Recitations
Check it out 🥷
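A multi-reciter audio corpus like this can be large, so streaming avoids a full download; a minimal sketch, again assuming a "train" split:

# A minimal sketch; streaming fetches rows lazily instead of downloading all audio.
from datasets import load_dataset

ds = load_dataset("MohamedRashad/Quran-Recitations", split="train", streaming=True)
for example in ds.take(3):  # peek at the first few rows
    print({k: type(v).__name__ for k, v in example.items()})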

posted an update · 5 months ago
Post · 2145
For those interested in trying the new canopylabs/orpheus-3b-0.1-ft model, I made a Space for you:
MohamedRashad/Orpheus-TTS

posted an update · 6 months ago
Post · 3522
I think we have released the best Arabic model under 25B parameters, at least based on https://huggingface.co/spaces/inceptionai/AraGen-Leaderboard
Yehia = ALLaM-AI/ALLaM-7B-Instruct-preview + GRPO,
and it's ranked the number one model under the 25B parameter mark.
Now, I said "I think", not "I am sure", because this model used the same evaluation metric the AraGen developers use (3C3H) as a reward model to improve its responses, and that sparks a question: is this something good for users, or is it another type of overfitting that we don't want?
I don't know whether this is a good thing or a bad thing, but what I do know is that you can try it from here:
Navid-AI/Yehia-7B-preview
or download it for your personal experiments from here:
Navid-AI/Yehia-7B-preview
Ramadan Kareem 🌙
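For those personal experiments, a minimal sketch with transformers; that this repo ships a chat template is an assumption, so check the model card for the intended prompt format:

# A minimal sketch; assumes the repo provides a chat template and bf16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Navid-AI/Yehia-7B-preview"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# "Introduce yourself briefly." in Arabic
messages = [{"role": "user", "content": "عرّف بنفسك باختصار."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))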

posted an update · 6 months ago
Post · 3338
Today is a big day for the Arabic language.
We have Navid-AI/The-Arabic-Rag-Leaderboard,
an update for OALL/Open-Arabic-LLM-Leaderboard,
and the release of atlasia/darija-chatbot-arena.
All of these announcements landed within 12 hours of each other 🤯

reacted to lewtun's post with ❤️ · 6 months ago
Post · 5417
Introducing OpenR1-Math-220k!
open-r1/OpenR1-Math-220k
The community has been busy distilling DeepSeek-R1 from inference providers, but we decided to have a go at doing it ourselves from scratch 💪
What's new compared to existing reasoning datasets?
♾ Based on AI-MO/NuminaMath-1.5: we focus on math reasoning traces and generate answers for problems in NuminaMath 1.5, an improved version of the popular NuminaMath-CoT dataset.
🐳 800k R1 reasoning traces: We generate two answers for 400k problems using DeepSeek R1. The filtered dataset contains 220k problems with correct reasoning traces.
🖥 512 H100s running locally: Instead of relying on an API, we leverage vLLM and SGLang to run generations locally on our science cluster, generating 180k reasoning traces per day.
⏳ Automated filtering: We apply Math Verify to only retain problems with at least one correct answer. We also leverage Llama3.3-70B-Instruct as a judge to retrieve more correct examples (e.g. for cases with malformed answers that can't be verified with a rules-based parser).
📊 We match the performance of DeepSeek-Distill-Qwen-7B by finetuning Qwen-7B-Math-Instruct on our dataset.
🔎 Read our blog post for all the nitty-gritty details: https://huggingface.co/blog/open-r1/update-2
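The automated-filtering step is the interesting part; as a rough illustration, here is a minimal sketch of the core equivalence check the Math Verify library performs (the real OpenR1 pipeline adds answer extraction and an LLM judge on top, per the blog post):

# A minimal sketch of Math Verify's basic check; the OpenR1 pipeline wraps
# this with answer extraction and an LLM judge.
from math_verify import parse, verify

gold = parse("$\\frac{1}{2}$")  # reference answer from NuminaMath
pred = parse("$0.5$")           # answer pulled from an R1 reasoning trace
print(verify(gold, pred))       # True: equivalent despite different notation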
I am considering canceling my Pro subscription because I just discovered that I am limited to only 10 ZeroGPU Spaces on my account. This number should be way higher.