Update README.md
README.md
CHANGED
@@ -1,7 +1,7 @@
 ---
 license: mit
 datasets:
-- VisualSphinx/VisualSphinx-
+- VisualSphinx/VisualSphinx-Seeds
 language:
 - en
 base_model:
@@ -22,4 +22,4 @@ VisualSphinx is the largest fully-synthetic open-source dataset providing vision
 
 ## 📊 About This Model
 
-This model is used for tagging the difficulty of our [VisualSphinx-V1](https://huggingface.co/datasets/VisualSphinx/VisualSphinx-V1-Raw) synthetic dataset. To train this model, we perform GRPO on Qwen/Qwen2.5-VL-7B-Instruct using our [seed dataset](https://huggingface.co/datasets/VisualSphinx/VisualSphinx-
+This model is used for tagging the difficulty of our [VisualSphinx-V1](https://huggingface.co/datasets/VisualSphinx/VisualSphinx-V1-Raw) synthetic dataset. To train this model, we perform GRPO on Qwen/Qwen2.5-VL-7B-Instruct using our [seed dataset](https://huggingface.co/datasets/VisualSphinx/VisualSphinx-Seeds) for 256 steps.
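The updated README states that the difficulty tagger was obtained by running GRPO on Qwen/Qwen2.5-VL-7B-Instruct over the VisualSphinx-Seeds dataset for 256 steps. As a rough illustration of that setup (not the authors' actual training script), here is a minimal sketch assuming TRL's `GRPOTrainer` and that the seed dataset exposes prompts in the format the trainer expects; the reward function, batch sizes, and output directory are placeholders.

```python
# Hypothetical sketch of the GRPO setup described in the README:
# Qwen2.5-VL-7B-Instruct trained for 256 steps on VisualSphinx-Seeds.
# This is NOT the authors' script; the reward function is a placeholder.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Seed puzzles used as prompts (assumed to expose a "prompt" column in the
# format GRPOTrainer expects).
train_dataset = load_dataset("VisualSphinx/VisualSphinx-Seeds", split="train")

def format_reward(completions, **kwargs):
    """Placeholder reward: 1.0 if the completion is non-empty, else 0.0.
    The actual reward used for difficulty tagging is not given in the README."""
    rewards = []
    for completion in completions:
        # Completions may be plain strings or chat-style message lists
        # depending on the dataset format; handle both defensively.
        text = completion if isinstance(completion, str) else completion[0]["content"]
        rewards.append(1.0 if text.strip() else 0.0)
    return rewards

config = GRPOConfig(
    output_dir="qwen2.5-vl-7b-grpo-difficulty",  # placeholder name
    max_steps=256,                # "256 steps" per the README
    per_device_train_batch_size=4,
    num_generations=4,            # completions sampled per prompt for the group baseline
    bf16=True,
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    reward_funcs=format_reward,
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```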