edbeeching (HF Staff) committed
Commit 588f655 · verified · 1 Parent(s): b30f099

End of training

Files changed (2)
  1. README.md +3 -1
  2. config.json +1 -1
README.md CHANGED
@@ -1,9 +1,11 @@
 ---
 base_model: open-r1/Qwen2.5-Math-7B-RoPE-300k
+datasets: open-r1/Mixture-of-Thoughts
 library_name: transformers
 model_name: OpenR1-Distill-7B
 tags:
 - generated_from_trainer
+- open-r1
 - trl
 - sft
 licence: license
@@ -11,7 +13,7 @@ licence: license
 
 # Model Card for OpenR1-Distill-7B
 
-This model is a fine-tuned version of [open-r1/Qwen2.5-Math-7B-RoPE-300k](https://huggingface.co/open-r1/Qwen2.5-Math-7B-RoPE-300k).
+This model is a fine-tuned version of [open-r1/Qwen2.5-Math-7B-RoPE-300k](https://huggingface.co/open-r1/Qwen2.5-Math-7B-RoPE-300k) on the [open-r1/Mixture-of-Thoughts](https://huggingface.co/datasets/open-r1/Mixture-of-Thoughts) dataset.
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
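The diff shows only the "## Quick start" heading; the snippet beneath it is not part of this commit. As an illustrative sketch of how a TRL-trained chat model like this is typically loaded with transformers (the repo id `open-r1/OpenR1-Distill-7B` is an assumption inferred from `model_name`, not stated in the diff):

```python
# Illustrative sketch only; the card's actual quick-start snippet is not shown in this diff.
# The repo id below is an assumption based on the model_name field.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="open-r1/OpenR1-Distill-7B",  # assumed repo id
    torch_dtype="bfloat16",
    device_map="auto",
)
messages = [{"role": "user", "content": "What is 17 * 24?"}]
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"])
```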
config.json CHANGED
@@ -22,7 +22,7 @@
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
   "transformers_version": "4.52.0.dev0",
-  "use_cache": false,
+  "use_cache": true,
   "use_sliding_window": false,
   "vocab_size": 152064
 }
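The only functional change in config.json flips `use_cache` back to `true`. The key/value cache is commonly disabled during supervised fine-tuning (it conflicts with gradient checkpointing), and re-enabling it at the end of training restores fast autoregressive generation. A minimal sketch of what the flag controls, assuming the same hypothetical repo id as above:

```python
# Minimal sketch; the repo id is an assumption (the diff does not name the final model repo).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("open-r1/OpenR1-Distill-7B")
print(config.use_cache)  # expected to be True after this commit

# With use_cache=True, model.generate() stores past key/value states, so each new
# token attends to cached activations instead of recomputing the whole prefix.
```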