Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)
LLaMA3-iterative-DPO-final - bnb 8bits
- Model creator: https://huggingface.co/RLHFlow/
- Original model: https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final/
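Since this repository hosts a bitsandbytes 8-bit variant, it can typically be loaded directly with `transformers`. A minimal sketch, assuming `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available (the original RLHFlow repo id is used here for illustration):

```python
# Sketch: loading the model in 8-bit with bitsandbytes. Assumes `transformers`,
# `accelerate`, and `bitsandbytes` are installed and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# For a checkpoint uploaded pre-quantized, from_pretrained picks the stored
# quantization config up automatically; passing it explicitly is equivalent.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "RLHFlow/LLaMA3-iterative-DPO-final",  # original repo id, for illustration
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final")
```

Loaded this way, the 16-bit weights are quantized on the fly, so peak memory stays close to the 8-bit footprint of roughly one byte per parameter.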

Original model description:
---
license: llama3
---
# LLaMA3-iterative-DPO-final

* **Paper**: [RLHF Workflow: From Reward Modeling to Online RLHF](https://arxiv.org/pdf/2405.07863) (Published in TMLR, 2024)
* **Authors**: Hanze Dong*, Wei Xiong*, Bo Pang*, Haoxiang Wang*, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, Tong Zhang
* **Code**: https://github.com/RLHFlow/Online-RLHF

## Introduction
We release an unofficial checkpoint of a state-of-the-art instruct model of its class, **LLaMA3-iterative-DPO-final**.
On all three widely used instruct-model benchmarks (**Alpaca-Eval-V2**, **MT-Bench**, and **Chat-Arena-Hard**), our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most larger open-source models (e.g., Mixtral-8x7B-it),
and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained on open-source datasets without any additional human or GPT-4 labeling.

Even better, we provide a [detailed recipe](https://github.com/RLHFlow/Online-RLHF) to reproduce the model. Enjoy!

## Model Releases
See the [collection](https://huggingface.co/collections/RLHFlow/online-rlhf-663ae95fade1a39663dab218) for the training set, reward/preference model, and SFT model.

- [SFT model](https://huggingface.co/RLHFlow/LLaMA3-SFT)
- [Reward model](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1)
  - This model is closer to the concise version described in the report. We are still working on releasing the full model due to some license issues.

## Dataset
- [Preference data mix](https://huggingface.co/datasets/hendrydong/preference_700K)
- [Prompt collection for RLHF training](https://huggingface.co/datasets/RLHFlow/prompt-collection-v0.1)

## Training methods
We have developed a simple and efficient online RLHF recipe for LLM instruct training. Our recipe is DPO-based, and thus much cheaper and simpler to train and tune than PPO-based approaches.
Unlike the widely used offline DPO, the online component of our approach effectively mitigates distribution shift during policy optimization.
For a detailed exposition, please refer to our accompanying technical report.
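At the heart of each round sits the standard DPO objective on preference pairs. As an illustrative sketch (not the authors' training code), the per-pair loss can be computed from summed token log-probabilities of the chosen and rejected responses under the policy and the frozen reference model; `beta` is the KL-regularization coefficient, and the numbers below are made-up examples:

```python
import math

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one preference pair:
    -log sigmoid(beta * [(log pi(y_w|x) - log pi_ref(y_w|x))
                         - (log pi(y_l|x) - log pi_ref(y_l|x))])."""
    chosen_margin = policy_logp_chosen - ref_logp_chosen
    rejected_margin = policy_logp_rejected - ref_logp_rejected
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(x)) rewritten stably as log(1 + exp(-x))
    return math.log1p(math.exp(-logits))

# When the policy favors the chosen answer more than the reference does,
# the loss drops below -log(0.5) ~= 0.693; at zero margin it equals exactly that.
print(round(dpo_loss(-10.0, -12.0, -11.0, -11.0), 4))  # prints 0.5981
```

The online variant re-collects preference pairs with the current policy between rounds, which is what keeps the training distribution close to the policy's own outputs.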

## Chat Benchmarks

| **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** |
|-------------------------|----------|-------------------|-----------------------|--------------|---------------------|
| **Small Open-Sourced Models** | | | | | |
| Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 |
| Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - |
| Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 |
| Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - |
| Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 |
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 |
| **Ours** | | | | | |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 |
| Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** |
| **Large Open-Sourced Models** | | | | | |
| Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 |
| Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 |
| Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 |
| Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 |
| LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 |
| Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 |
| **Proprietary Models** | | | | | |
| GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 |
| GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 |
| GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 |
| Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 |
| GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 |

## Academic Benchmarks

| **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
|----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 |
| Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

model = AutoModelForCausalLM.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final")
tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final")

messages = [
    {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
]

# Render the conversation with the model's chat template; add_generation_prompt
# appends the assistant header so the model starts a fresh reply.
model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model.to(device)
model_inputs = model_inputs.to(device)

output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True)
model_outputs = tokenizer.batch_decode(output_tokens)
print(model_outputs[0])
```

## Limitations
RLHFlow/LLaMA3-iterative-DPO-final is an unofficial checkpoint developed to illustrate the power of online iterative RLHF and is intended for research purposes. While safety and ethical considerations are integral to our alignment process,
there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions.
We are committed to continuously improving our models to minimize such risks, and we encourage responsible usage.

## Citation
Please cite our technical report if you find our model useful for your research or product.
```
@misc{dong2024rlhf,
      title={RLHF Workflow: From Reward Modeling to Online RLHF},
      author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
      year={2024},
      eprint={2405.07863},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

@misc{xiong2024iterative,
      title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
      author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
      year={2024},
      eprint={2312.11456},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```