This is an educational model distilled from Qwen2.5-7B-Instruct, based on EduBench.

- [Paper](https://arxiv.org/abs/2505.16160)
- [GitHub](https://github.com/DIRECT-BIT/EduBench)

## Model Details

**Model Name**: EDU-Qwen2.5-7B

**Model Type**: Distilled instruction-tuned language model (7B parameters)

**Base Model**: [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)

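Since the model is fine-tuned from Qwen2.5-7B-Instruct, it should work with the standard `transformers` chat workflow. The snippet below is a minimal sketch, assuming the checkpoint is available locally or on the Hugging Face Hub; the `model_id` is a placeholder, not a confirmed repo id.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EDU-Qwen2.5-7B"  # placeholder; replace with the actual checkpoint path or Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat prompt with the model's chat template (inherited from Qwen2.5).
messages = [
    {"role": "user", "content": "Explain the Pythagorean theorem to a middle-school student."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
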
## Training Data

To fully leverage the strengths of different response generation models across various scenarios, we adopt a multi-source distillation pipeline. For each task, we select the best-performing model on the test set as the response generator and use it to answer educational-domain questions, constructing the training dataset for the distilled model. Through this pipeline, we obtain a training set of 17,000 samples covering the subtasks of all 9 educational scenarios.
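
As a loose illustration of this pipeline, the sketch below shows how per-task teacher selection could produce (instruction, response) training pairs; the task names, teacher ids, and questions are invented placeholders, not details from the paper.

```python
# Hypothetical sketch of multi-source distillation data construction.
# For each task, the teacher that performed best on the test set answers
# the task's questions, and the resulting pairs become training data.

best_teacher_per_task = {  # placeholder teacher ids
    "question_answering": "teacher-model-A",
    "error_correction": "teacher-model-B",
}

questions_per_task = {  # placeholder questions
    "question_answering": ["What is photosynthesis?"],
    "error_correction": ["Fix this sentence: 'He go to school.'"],
}

def generate_response(teacher_id: str, question: str) -> str:
    # Stub: in practice this would query the selected teacher model.
    return f"[{teacher_id}] answer to: {question}"

# Assemble the distillation training set as (instruction, response) pairs.
training_set = [
    {"task": task, "instruction": q, "response": generate_response(teacher, q)}
    for task, teacher in best_teacher_per_task.items()
    for q in questions_per_task[task]
]
```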

More details are provided in Appendix K of our [paper](https://arxiv.org/abs/2505.16160).

## Performance
<div align="center">
<img src="performance" alt="Performance" width="1200"/>
<br>
</div>