sicer committed
Commit b1dea02 · 1 Parent(s): 101689c

Initial commit from existing repo

Files changed (11)
  1. CITATION.cff +44 -0
  2. LICENSE +201 -0
  3. MANIFEST.in +1 -0
  4. Makefile +14 -0
  5. README.md +742 -0
  6. README_zh.md +743 -0
  7. launching_script.sh +11 -0
  8. pyproject.toml +33 -0
  9. qwen2.log +578 -0
  10. requirements.txt +22 -0
  11. setup.py +104 -0
CITATION.cff ADDED
@@ -0,0 +1,44 @@
+ cff-version: 1.2.0
+ date-released: 2024-03
+ message: "If you use this software, please cite it as below."
+ authors:
+   - family-names: "Zheng"
+     given-names: "Yaowei"
+   - family-names: "Zhang"
+     given-names: "Richong"
+   - family-names: "Zhang"
+     given-names: "Junhao"
+   - family-names: "Ye"
+     given-names: "Yanhan"
+   - family-names: "Luo"
+     given-names: "Zheyan"
+   - family-names: "Feng"
+     given-names: "Zhangchi"
+   - family-names: "Ma"
+     given-names: "Yongqiang"
+ title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models"
+ url: "https://arxiv.org/abs/2403.13372"
+ preferred-citation:
+   type: conference-paper
+   conference:
+     name: "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)"
+   authors:
+     - family-names: "Zheng"
+       given-names: "Yaowei"
+     - family-names: "Zhang"
+       given-names: "Richong"
+     - family-names: "Zhang"
+       given-names: "Junhao"
+     - family-names: "Ye"
+       given-names: "Yanhan"
+     - family-names: "Luo"
+       given-names: "Zheyan"
+     - family-names: "Feng"
+       given-names: "Zhangchi"
+     - family-names: "Ma"
+       given-names: "Yongqiang"
+   title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models"
+   url: "https://arxiv.org/abs/2403.13372"
+   year: 2024
+   publisher: "Association for Computational Linguistics"
+   address: "Bangkok, Thailand"
LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
MANIFEST.in ADDED
@@ -0,0 +1 @@
+ include LICENSE requirements.txt
Makefile ADDED
@@ -0,0 +1,14 @@
+ .PHONY: quality style test
+
+ check_dirs := scripts src tests setup.py
+
+ quality:
+ 	ruff check $(check_dirs)
+ 	ruff format --check $(check_dirs)
+
+ style:
+ 	ruff check $(check_dirs) --fix
+ 	ruff format $(check_dirs)
+
+ test:
+ 	CUDA_VISIBLE_DEVICES= pytest tests/
README.md ADDED
@@ -0,0 +1,742 @@
+ ![# LLaMA Factory](assets/logo.png)
+
+ [![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
+ [![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
+ [![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
+ [![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
+ [![Citation](https://img.shields.io/badge/citation-91-green)](#projects-using-llama-factory)
+ [![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
+ [![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
+ [![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
+ [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)
+ [![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
+ [![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
+ [![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
+
+ [![GitHub Trend](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)
+
+ 👋 Join our [WeChat](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg).
+
+ \[ English | [中文](README_zh.md) \]
+
+ **Fine-tuning a large language model can be as easy as...**
+
+ https://github.com/user-attachments/assets/7c96b465-9df7-45f4-8053-bf03e58386d3
+
+ Choose your path:
+
+ - **Colab**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
+ - **PAI-DSW**: [Llama3 Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl)
+ - **Local machine**: Please refer to [usage](#getting-started)
+ - **Documentation (WIP)**: https://llamafactory.readthedocs.io/zh-cn/latest/
+
+ > [!NOTE]
+ > Except for the links above, all other websites are unauthorized third-party websites. Please use them with caution.
+
+ ## Table of Contents
+
+ - [Features](#features)
+ - [Benchmark](#benchmark)
+ - [Changelog](#changelog)
+ - [Supported Models](#supported-models)
+ - [Supported Training Approaches](#supported-training-approaches)
+ - [Provided Datasets](#provided-datasets)
+ - [Requirement](#requirement)
+ - [Getting Started](#getting-started)
+ - [Projects using LLaMA Factory](#projects-using-llama-factory)
+ - [License](#license)
+ - [Citation](#citation)
+ - [Acknowledgement](#acknowledgement)
+
+ ## Features
+
+ - **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
+ - **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO, ORPO, etc.
+ - **Scalable resources**: 16-bit full-tuning, freeze-tuning, LoRA and 2/3/4/5/6/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
+ - **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [Adam-mini](https://github.com/zyushun/Adam-mini), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA and Agent tuning.
+ - **Practical tricks**: [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), [Unsloth](https://github.com/unslothai/unsloth), [Liger Kernel](https://github.com/linkedin/Liger-Kernel), RoPE scaling, NEFTune and rsLoRA.
+ - **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
+ - **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.
+
+ ## Benchmark
+
+ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging the 4-bit quantization technique, LLaMA Factory's QLoRA further improves efficiency in terms of GPU memory.
+
+ ![benchmark](assets/benchmark.svg)
+
+ <details><summary>Definitions</summary>
+
+ - **Training Speed**: the number of training samples processed per second during training. (bs=4, cutoff_len=1024)
+ - **Rouge Score**: Rouge-2 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024)
+ - **GPU Memory**: peak GPU memory usage in 4-bit quantized training. (bs=1, cutoff_len=1024)
+ - We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA tuning.
+
+ </details>
+
+ ## Changelog
+
+ [24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#download-from-modelers-hub) for usage.
+
+ [24/09/19] We supported fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models.
+
+ [24/08/30] We supported fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thanks to [@simonJJJ](https://github.com/simonJJJ)'s PR.
+
+ [24/08/27] We supported the **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Try `enable_liger_kernel: true` for efficient training.
+
+ [24/08/09] We supported the **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README.md) for usage. Thanks to [@relic-yuexi](https://github.com/relic-yuexi)'s PR.
+
+ <details><summary>Full Changelog</summary>
+
+ [24/07/04] We supported [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thanks to [@chuan298](https://github.com/chuan298)'s PR.
+
+ [24/06/16] We supported the **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage.
+
+ [24/06/07] We supported fine-tuning the **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** and **[GLM-4](https://github.com/THUDM/GLM-4)** models.
+
+ [24/05/26] We supported the **[SimPO](https://arxiv.org/abs/2405.14734)** algorithm for preference learning. See [examples](examples/README.md) for usage.
+
+ [24/05/20] We supported fine-tuning the **PaliGemma** series models. Note that the PaliGemma models are pre-trained models; you need to fine-tune them with the `paligemma` template for chat completion.
+
+ [24/05/18] We supported the **[KTO](https://arxiv.org/abs/2402.01306)** algorithm for preference learning. See [examples](examples/README.md) for usage.
+
+ [24/05/14] We supported training and inference on Ascend NPU devices. Check the [installation](#installation) section for details.
+
+ [24/04/26] We supported fine-tuning the **LLaVA-1.5** multimodal LLMs. See [examples](examples/README.md) for usage.
+
+ [24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. Two Llama-3-derived models fine-tuned using LLaMA Factory are available at Hugging Face; check [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details.
+
+ [24/04/21] We supported **[Mixture-of-Depths](https://arxiv.org/abs/2404.02258)** according to [AstraMindAI's implementation](https://github.com/astramind-ai/Mixture-of-depths). See [examples](examples/README.md) for usage.
+
+ [24/04/16] We supported the **[BAdam](https://arxiv.org/abs/2404.02827)** optimizer. See [examples](examples/README.md) for usage.
+
+ [24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k within 24GB). It achieves **117%** speed and **50%** memory compared with FlashAttention-2; more benchmarks can be found on [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison).
+
+ [24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See [examples](examples/README.md) for usage.
+
+ [24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is available at arXiv!
+
+ [24/03/20] We supported **FSDP+QLoRA**, which fine-tunes a 70B model on 2x24GB GPUs. See [examples](examples/README.md) for usage.
+
+ [24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See [examples](examples/README.md) for usage.
+
+ [24/03/07] We supported the **[GaLore](https://arxiv.org/abs/2403.03507)** optimizer. See [examples](examples/README.md) for usage.
+
+ [24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster and concurrent inference. Try `infer_backend: vllm` to enjoy **270%** inference speed.
+
+ [24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). Try `use_dora: true` to activate DoRA training.
+
+ [24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See [examples](examples/README.md) for usage.
+
+ [24/02/05] Qwen1.5 (Qwen2 beta version) series models are supported in LLaMA-Factory. Check this [blog post](https://qwenlm.github.io/blog/qwen1.5/) for details.
+
+ [24/01/18] We supported **agent tuning** for most models, equipping models with tool-using abilities by fine-tuning with `dataset: glaive_toolcall_en`.
+
+ [23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try the `use_unsloth: true` argument to activate the unsloth patch. It achieves **170%** speed in our benchmark; check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.
+
+ [23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)** in our framework. See the hardware requirements [here](#hardware-requirement).
+
+ [23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)**. See [this tutorial](#download-from-modelscope-hub) for usage.
+
+ [23/10/21] We supported the **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try the `neftune_noise_alpha: 5` argument to activate NEFTune.
+
+ [23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try the `shift_attn: true` argument to enable shift short attention.
+
+ [23/09/23] We integrated the MMLU, C-Eval and CMMLU benchmarks in this repo. See [examples](examples/README.md) for usage.
+
+ [23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Try the `flash_attn: fa2` argument to enable FlashAttention-2 if you are using RTX 4090, A100 or H100 GPUs.
+
+ [23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try the `rope_scaling: linear` argument in training and the `rope_scaling: dynamic` argument at inference to extrapolate the position embeddings.
+
+ [23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [examples](examples/README.md) for usage.
+
+ [23/07/31] We supported **dataset streaming**. Try the `streaming: true` and `max_steps: 10000` arguments to load your dataset in streaming mode.
+
+ [23/07/29] We released two instruction-tuned 13B models at Hugging Face. See these Hugging Face repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details.
+
+ [23/07/18] We developed an **all-in-one Web UI** for training, evaluation and inference. Try `train_web.py` to fine-tune models in your Web browser. Thanks to [@KanadeSiina](https://github.com/KanadeSiina) and [@codemayq](https://github.com/codemayq) for their efforts in the development.
+
+ [23/07/09] We released **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹, an easy-to-use package for efficiently editing the factual knowledge of large language models. Please follow [FastEdit](https://github.com/hiyouga/FastEdit) if you are interested.
+
+ [23/06/29] We provided a **reproducible example** of training a chat model using instruction-following datasets; see [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft) for details.
+
+ [23/06/22] We aligned the [demo API](src/api_demo.py) with [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format so that you can use the fine-tuned model in **arbitrary ChatGPT-based applications**.
+
+ [23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). See [examples](examples/README.md) for usage.
+
+ </details>
+
+ ## Supported Models
+
+ | Model | Model size | Template |
+ | ----- | ---------- | -------- |
+ | [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |
+ | [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |
+ | [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |
+ | [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |
+ | [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |
+ | [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |
+ | [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma |
+ | [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 |
+ | [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 |
+ | [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |
+ | [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |
+ | [Llama 3-3.2](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 |
+ | [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava |
+ | [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next |
+ | [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video |
+ | [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 |
+ | [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |
+ | [OLMo](https://huggingface.co/allenai) | 1B/7B | - |
+ | [PaliGemma](https://huggingface.co/google) | 3B | paligemma |
+ | [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |
+ | [Phi-3](https://huggingface.co/microsoft) | 4B/7B/14B | phi |
+ | [Qwen (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen |
+ | [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B/72B | qwen2_vl |
+ | [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |
+ | [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |
+ | [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi |
+ | [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |
+ | [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan |
+
+ > [!NOTE]
+ > For the "base" models, the `template` argument can be chosen from `default`, `alpaca`, `vicuna`, etc. But make sure to use the **corresponding template** for the "instruct/chat" models.
+ >
+ > Remember to use the **SAME** template in training and inference.
+
+ Please refer to [constants.py](src/llamafactory/extras/constants.py) for a full list of the models we support.
+
+ You can also add a custom chat template to [template.py](src/llamafactory/data/template.py).
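+
+ As a minimal sketch, the template is pinned in the training config alongside the model (the model path here is illustrative; pick the template that matches your model):
+
+ ```yaml
+ model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # illustrative model ID
+ template: llama3  # use the same value again at inference time
+ ```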
+
+ ## Supported Training Approaches
+
+ | Approach | Full-tuning | Freeze-tuning | LoRA | QLoRA |
+ | -------- | ----------- | ------------- | ---- | ----- |
+ | Pre-Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+ | Supervised Fine-Tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+ | Reward Modeling | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+ | PPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+ | DPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+ | KTO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+ | ORPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+ | SimPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+
+ > [!TIP]
+ > The implementation details of PPO can be found in [this blog](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html).
+
+ ## Provided Datasets
+
+ <details><summary>Pre-training datasets</summary>
+
+ - [Wiki Demo (en)](data/wiki_demo.txt)
+ - [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
+ - [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)
+ - [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220)
+ - [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)
+ - [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile)
+ - [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B)
+ - [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
+ - [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
+ - [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack)
+ - [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)
+
+ </details>
+
+ <details><summary>Supervised fine-tuning datasets</summary>
+
+ - [Identity (en&zh)](data/identity.json)
+ - [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
+ - [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)
+ - [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
+ - [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
+ - [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
+ - [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
+ - [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
+ - [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
+ - [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
+ - [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)
+ - [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
+ - [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
+ - [UltraChat (en)](https://github.com/thunlp/UltraChat)
+ - [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
+ - [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
+ - [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
+ - [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca)
+ - [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca)
+ - [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
+ - [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
+ - [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa)
+ - [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa)
+ - [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
+ - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
+ - [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
+ - [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
+ - [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)
+ - [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
+ - [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
+ - [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)
+ - [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)
+ - [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
+ - [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
+ - [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
+ - [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
+ - [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
+ - [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
+ - [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
+ - [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)
+ - [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
+ - [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)
+ - [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
+ - [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)
+ - [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de)
+ - [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de)
+ - [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de)
+ - [Dolphin (de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de)
+ - [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de)
+ - [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de)
+ - [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de)
+
+ </details>
+
+ <details><summary>Preference datasets</summary>
+
+ - [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
+ - [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
+ - [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)
+ - [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)
+ - [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
+ - [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
+ - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
+ - [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)
+ - [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)
+
+ </details>
+
+ Some datasets require confirmation before use, so we recommend logging in with your Hugging Face account using these commands.
+
+ ```bash
+ pip install --upgrade huggingface_hub
+ huggingface-cli login
+ ```
+
+ ## Requirement
+
+ | Mandatory | Minimum | Recommended |
+ | --------- | ------- | ----------- |
+ | python | 3.8 | 3.11 |
+ | torch | 1.13.1 | 2.4.0 |
+ | transformers | 4.41.2 | 4.43.4 |
+ | datasets | 2.16.0 | 2.20.0 |
+ | accelerate | 0.30.1 | 0.32.0 |
+ | peft | 0.11.1 | 0.12.0 |
+ | trl | 0.8.6 | 0.9.6 |
+
+ | Optional | Minimum | Recommended |
+ | -------- | ------- | ----------- |
+ | CUDA | 11.6 | 12.2 |
+ | deepspeed | 0.10.0 | 0.14.0 |
+ | bitsandbytes | 0.39.0 | 0.43.1 |
+ | vllm | 0.4.3 | 0.5.0 |
+ | flash-attn | 2.3.0 | 2.6.3 |
+
+ ### Hardware Requirement
+
+ \* *estimated*
+
+ | Method | Bits | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B |
+ | ------ | ---- | -- | --- | --- | --- | ---- | ---- | ----- |
+ | Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB |
+ | Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB |
+ | Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB |
+ | LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB |
+ | QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB |
+ | QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB |
+ | QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB |
+
+ ## Getting Started
+
+ ### Installation
+
+ > [!IMPORTANT]
+ > Installation is mandatory.
+
+ ```bash
+ git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
+ cd LLaMA-Factory
+ pip install -e ".[torch,metrics]"
+ ```
+
+ Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, awq, aqlm, vllm, galore, badam, adam-mini, qwen, modelscope, openmind, quality.
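+
+ For example, a sketch of an install that adds DeepSpeed and vLLM support on top of the defaults (choose the extras that match your hardware and workflow):
+
+ ```bash
+ pip install -e ".[torch,metrics,deepspeed,vllm]"
+ ```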
+
+ > [!TIP]
+ > Use `pip install --no-deps -e .` to resolve package conflicts.
+
+ <details><summary>For Windows users</summary>
+
+ If you want to enable quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library that supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.
+
+ ```bash
+ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
+ ```
+
+ To enable FlashAttention-2 on the Windows platform, you need to install the precompiled `flash-attn` library, which supports CUDA 12.1 to 12.2. Please download the corresponding version from [flash-attention](https://github.com/bdashore3/flash-attention/releases) based on your requirements.
+
+ </details>
+
+ <details><summary>For Ascend NPU users</summary>
+
+ To install LLaMA Factory on Ascend NPU devices, please specify the extra dependencies: `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. Please follow the [installation tutorial](https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/softwareinstall/instg/atlasdeploy_03_0031.html) or use the following commands:
+
+ ```bash
+ # replace the url according to your CANN version and devices
+ # install CANN Toolkit
+ wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run
+ bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install
+
+ # install CANN Kernels
+ wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run
+ bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install
+
+ # set env variables
+ source /usr/local/Ascend/ascend-toolkit/set_env.sh
+ ```
+
+ | Requirement | Minimum | Recommended |
+ | ----------- | ------- | ----------- |
+ | CANN | 8.0.RC1 | 8.0.RC1 |
+ | torch | 2.1.0 | 2.1.0 |
+ | torch-npu | 2.1.0 | 2.1.0.post3 |
+ | deepspeed | 0.13.2 | 0.13.2 |
+
+ Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the devices to use.
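+
+ For example, a sketch assuming you want to train on NPU 0 only (the config path is the one from the Quickstart below):
+
+ ```bash
+ ASCEND_RT_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
+ ```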
+
+ If you cannot run inference on NPU devices, try setting `do_sample: false` in the configuration.
+
+ Download the pre-built Docker images: [32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
+
+ </details>
+
+ ### Data Preparation
+
+ Please refer to [data/README.md](data/README.md) for details about the format of the dataset files. You can either use datasets from the HuggingFace / ModelScope / Modelers hub or load datasets from local disk.
+
+ > [!NOTE]
+ > Please update `data/dataset_info.json` to use your custom dataset.
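+
+ As a sketch, a minimal alpaca-style entry in `data/dataset_info.json` might look like the following (the dataset and file names are placeholders; see [data/README.md](data/README.md) for the authoritative schema):
+
+ ```json
+ {
+   "my_dataset": {
+     "file_name": "my_dataset.json",
+     "columns": {
+       "prompt": "instruction",
+       "query": "input",
+       "response": "output"
+     }
+   }
+ }
+ ```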
+
+ ### Quickstart
+
+ Use the following three commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively.
+
+ ```bash
+ llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
+ llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
+ llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
+ ```
+
+ See [examples/README.md](examples/README.md) for advanced usage (including distributed training).
+
+ > [!TIP]
+ > Use `llamafactory-cli help` to show help information.
+
+ ### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))
+
+ ```bash
+ llamafactory-cli webui
+ ```
+
+ ### Build Docker
+
+ For CUDA users:
+
+ ```bash
+ cd docker/docker-cuda/
+ docker compose up -d
+ docker compose exec llamafactory bash
+ ```
+
+ For Ascend NPU users:
+
+ ```bash
+ cd docker/docker-npu/
+ docker compose up -d
+ docker compose exec llamafactory bash
+ ```
+
+ For AMD ROCm users:
+
+ ```bash
+ cd docker/docker-rocm/
+ docker compose up -d
+ docker compose exec llamafactory bash
+ ```
+
+ <details><summary>Build without Docker Compose</summary>
+
+ For CUDA users:
+
+ ```bash
+ docker build -f ./docker/docker-cuda/Dockerfile \
+     --build-arg INSTALL_BNB=false \
+     --build-arg INSTALL_VLLM=false \
+     --build-arg INSTALL_DEEPSPEED=false \
+     --build-arg INSTALL_FLASHATTN=false \
+     --build-arg PIP_INDEX=https://pypi.org/simple \
+     -t llamafactory:latest .
+
+ docker run -dit --gpus=all \
+     -v ./hf_cache:/root/.cache/huggingface \
+     -v ./ms_cache:/root/.cache/modelscope \
+     -v ./om_cache:/root/.cache/openmind \
+     -v ./data:/app/data \
+     -v ./output:/app/output \
+     -p 7860:7860 \
+     -p 8000:8000 \
+     --shm-size 16G \
+     --name llamafactory \
+     llamafactory:latest
+
+ docker exec -it llamafactory bash
+ ```
+
+ For Ascend NPU users:
+
+ ```bash
+ # choose the docker image according to your environment
+ docker build -f ./docker/docker-npu/Dockerfile \
+     --build-arg INSTALL_DEEPSPEED=false \
+     --build-arg PIP_INDEX=https://pypi.org/simple \
+     -t llamafactory:latest .
+
+ # change `device` according to your resources
+ docker run -dit \
+     -v ./hf_cache:/root/.cache/huggingface \
+     -v ./ms_cache:/root/.cache/modelscope \
+     -v ./om_cache:/root/.cache/openmind \
+     -v ./data:/app/data \
+     -v ./output:/app/output \
+     -v /usr/local/dcmi:/usr/local/dcmi \
+     -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+     -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
+     -v /etc/ascend_install.info:/etc/ascend_install.info \
+     -p 7860:7860 \
+     -p 8000:8000 \
+     --device /dev/davinci0 \
+     --device /dev/davinci_manager \
+     --device /dev/devmm_svm \
+     --device /dev/hisi_hdc \
+     --shm-size 16G \
+     --name llamafactory \
+     llamafactory:latest
+
+ docker exec -it llamafactory bash
+ ```
+
+ For AMD ROCm users:
+
+ ```bash
+ docker build -f ./docker/docker-rocm/Dockerfile \
+     --build-arg INSTALL_BNB=false \
+     --build-arg INSTALL_VLLM=false \
+     --build-arg INSTALL_DEEPSPEED=false \
+     --build-arg INSTALL_FLASHATTN=false \
+     --build-arg PIP_INDEX=https://pypi.org/simple \
+     -t llamafactory:latest .
+
+ docker run -dit \
+     -v ./hf_cache:/root/.cache/huggingface \
+     -v ./ms_cache:/root/.cache/modelscope \
+     -v ./om_cache:/root/.cache/openmind \
+     -v ./data:/app/data \
+     -v ./output:/app/output \
+     -v ./saves:/app/saves \
+     -p 7860:7860 \
+     -p 8000:8000 \
+     --device /dev/kfd \
+     --device /dev/dri \
+     --shm-size 16G \
+     --name llamafactory \
+     llamafactory:latest
+
+ docker exec -it llamafactory bash
+ ```
+
+ </details>
+
+ <details><summary>Details about volumes</summary>
+
+ - `hf_cache`: Utilize the Hugging Face cache on the host machine. Reassignable if a cache already exists in a different directory.
+ - `ms_cache`: Similar to the Hugging Face cache but for ModelScope users.
+ - `om_cache`: Similar to the Hugging Face cache but for Modelers users.
+ - `data`: Place datasets in this directory of the host machine so that they can be selected in the LLaMA Board GUI.
+ - `output`: Set the export dir to this location so that the merged result can be accessed directly on the host machine.
+
+ </details>
+
+ ### Deploy with OpenAI-style API and vLLM
+
+ ```bash
+ API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
+ ```
+
+ > [!TIP]
+ > Visit [this page](https://platform.openai.com/docs/api-reference/chat/create) for the API documentation.
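+
+ Once the server is running, any OpenAI-compatible client can query it. A minimal sketch with `curl`, assuming the API listens on `localhost:8000` (the model name is a placeholder; use the one reported by the server):
+
+ ```bash
+ curl http://localhost:8000/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello!"}]}'
+ ```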
+
+ ### Download from ModelScope Hub
+
+ If you have trouble downloading models and datasets from Hugging Face, you can use ModelScope instead.
+
+ ```bash
+ export USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows
+ ```
+
+ Train the model by specifying a model ID of the ModelScope Hub as the `model_name_or_path`. You can find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`.
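+
+ A sketch of the corresponding line in your training yaml (using the example model ID above):
+
+ ```yaml
+ model_name_or_path: LLM-Research/Meta-Llama-3-8B-Instruct
+ ```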
+
+ ### Download from Modelers Hub
+
+ You can also use the Modelers Hub to download models and datasets.
+
+ ```bash
+ export USE_OPENMIND_HUB=1 # `set USE_OPENMIND_HUB=1` for Windows
+ ```
+
+ Train the model by specifying a model ID of the Modelers Hub as the `model_name_or_path`. You can find a full list of model IDs at [Modelers Hub](https://modelers.cn/models), e.g., `TeleAI/TeleChat-7B-pt`.
+
+ ### Use W&B Logger
+
+ To use [Weights & Biases](https://wandb.ai) for logging experimental results, you need to add the following arguments to your yaml files.
+
+ ```yaml
+ report_to: wandb
+ run_name: test_run # optional
+ ```
+
+ Set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching training tasks to log in with your W&B account.
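+
+ For example, a sketch of a launch command (replace the key with your own; the config path is the one from the Quickstart):
+
+ ```bash
+ WANDB_API_KEY=<your_api_key> llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
+ ```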
613
+
614
+ ## Projects using LLaMA Factory
615
+
616
+ If you have a project that should be incorporated, please contact via email or create a pull request.
617
+
618
+ <details><summary>Click to show</summary>
619
+
620
+ 1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)
621
+ 1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)
622
+ 1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)
623
+ 1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816)
624
+ 1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710)
625
+ 1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)
626
+ 1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)
627
+ 1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904)
628
+ 1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625)
629
+ 1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176)
630
+ 1. Yang et al. LaCo: Large Language Model Pruning via Layer Collaps. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187)
631
+ 1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746)
632
+ 1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801)
633
+ 1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809)
634
+ 1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819)
635
+ 1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204)
636
+ 1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714)
637
+ 1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043)
638
+ 1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)
639
+ 1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)
640
+ 1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)
641
+ 1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)
642
+ 1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)
643
+ 1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)
644
+ 1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)
645
+ 1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)
646
+ 1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)
647
+ 1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)
648
+ 1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)
649
+ 1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)
650
+ 1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)
651
+ 1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)
652
+ 1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)
653
+ 1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)
654
+ 1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)
655
+ 1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140)
656
+ 1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)
657
+ 1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760)
658
+ 1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378)
659
+ 1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055)
660
+ 1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739)
661
+ 1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816)
662
+ 1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215)
663
+ 1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30)
664
+ 1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380)
665
+ 1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106)
666
+ 1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136)
667
+ 1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496)
668
+ 1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688)
669
+ 1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955)
670
+ 1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973)
671
+ 1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115)
672
+ 1. Zhu et al. Are Large Language Models Good Statisticians? 2024. [[arxiv]](https://arxiv.org/abs/2406.07815)
673
+ 1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099)
674
+ 1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173)
675
+ 1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074)
676
+ 1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408)
677
+ 1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546)
678
+ 1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695)
679
+ 1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233)
680
+ 1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069)
681
+ 1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25)
682
+ 1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949)
683
+ 1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365)
684
+ 1. Lin et al. DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470)
685
+ 1. Bako et al. Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129)
686
+ 1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044)
687
+ 1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756)
688
+ 1. Inouye et al. Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/)
689
+ 1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561)
690
+ 1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637)
691
+ 1. Sun et al. LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535)
692
+ 1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705)
693
+ 1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137)
694
+ 1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf)
695
+ 1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11)
696
+ 1. Wang et al. Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23)
697
+ 1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693)
698
+ 1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168)
699
+ 1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/)
700
+ 1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072)
701
+ 1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for astronomy, based on ChatGLM2-6B and Qwen-14B.
702
+ 1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in the Chinese legal domain, based on Baichuan-13B, capable of retrieving and reasoning over legal knowledge.
703
+ 1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in the Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
704
+ 1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for the Chinese medical domain, based on LLaMA2-7B and Baichuan-13B.
705
+ 1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI personality large language models, capable of giving any LLM one of 16 personality types based on different datasets and training methods.
706
+ 1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generating metadata for Stable Diffusion. [[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
707
+ 1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A multimodal large language model specialized in the Chinese medical domain, based on LLaVA-1.5-7B.
708
+ 1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models.
709
+ 1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PCs with NVIDIA RTX GPUs.
710
+ 1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way to build multi-agent LLM applications, with support for model fine-tuning via LLaMA Factory.
711
+
712
+ </details>
713
+
714
+ ## License
715
+
716
+ This repository is licensed under the [Apache-2.0 License](LICENSE).
717
+
718
+ Please follow the model licenses to use the corresponding model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
719
+
720
+ ## Citation
721
+
722
+ If this work is helpful, please cite it as:
723
+
724
+ ```bibtex
725
+ @inproceedings{zheng2024llamafactory,
726
+ title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
727
+ author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
728
+ booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
729
+ address={Bangkok, Thailand},
730
+ publisher={Association for Computational Linguistics},
731
+ year={2024},
732
+ url={http://arxiv.org/abs/2403.13372}
733
+ }
734
+ ```
735
+
736
+ ## Acknowledgement
737
+
738
+ This repo benefits from [PEFT](https://github.com/huggingface/peft), [TRL](https://github.com/huggingface/trl), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful work.
739
+
740
+ ## Star History
741
+
742
+ ![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Factory&type=Date)
README_zh.md ADDED
@@ -0,0 +1,743 @@
1
+ ![# LLaMA Factory](assets/logo.png)
2
+
3
+ [![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
4
+ [![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
5
+ [![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
6
+ [![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
7
+ [![Citation](https://img.shields.io/badge/citation-91-green)](#projects-using-llama-factory)
8
+ [![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
9
+ [![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
10
+ [![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
11
+ [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)
12
+ [![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
13
+ [![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
14
+ [![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
15
+
16
+ [![GitHub Trend](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)
17
+
18
+ 👋 Join our [WeChat group](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg).
19
+
20
+ \[ [English](README.md) | Chinese \]
21
+
22
+ **Fine-tuning a large language model can be easy as...**
23
+
24
+ https://github.com/user-attachments/assets/e6ce34b0-52d5-4f3e-a830-592106c4c272
25
+
26
+ Choose your path:
27
+
28
+ - **Colab**: https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing
29
+ - **PAI-DSW**: [Llama3 example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl)
30
+ - **Local machine**: please refer to [Getting Started](#getting-started)
31
+ - **Beginner tutorial**: https://zhuanlan.zhihu.com/p/695287607
32
+ - **Documentation**: https://llamafactory.readthedocs.io/zh-cn/latest/
33
+
34
+ > [!NOTE]
35
+ > Except for the links above, all other websites are unauthorized third-party websites. Please use them with caution.
36
+
37
+ ## Table of Contents
38
+
39
+ - [Features](#features)
40
+ - [Benchmark](#benchmark)
41
+ - [Changelog](#changelog)
42
+ - [Supported Models](#supported-models)
43
+ - [Supported Training Approaches](#supported-training-approaches)
44
+ - [Provided Datasets](#provided-datasets)
45
+ - [Requirement](#requirement)
46
+ - [Getting Started](#getting-started)
47
+ - [Projects using LLaMA Factory](#projects-using-llama-factory)
48
+ - [License](#license)
49
+ - [Citation](#citation)
50
+ - [Acknowledgement](#acknowledgement)
51
+
52
+ ## Features
53
+
54
+ - **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
55
+ - **Integrated methods**: (continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO training, DPO training, KTO training, ORPO training, etc.
56
+ - **Scalable precision**: 16-bit full-parameter fine-tuning, freeze-tuning, LoRA fine-tuning, and 2/3/4/5/6/8-bit QLoRA fine-tuning via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
57
+ - **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [Adam-mini](https://github.com/zyushun/Adam-mini), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA and agent tuning.
58
+ - **Practical tricks**: [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), [Unsloth](https://github.com/unslothai/unsloth), [Liger Kernel](https://github.com/linkedin/Liger-Kernel), RoPE scaling, NEFTune and rsLoRA.
59
+ - **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
60
+ - **Faster inference**: OpenAI-style API, browser UI and command-line interface backed by vLLM.
61
+
62
+ ## Benchmark
63
+
64
+ Compared with ChatGLM's official [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning delivers a **3.7x** speedup and achieves a higher Rouge score on the advertising text generation task. Combined with 4-bit quantization, LLaMA Factory's QLoRA further reduces GPU memory usage.
65
+
66
+ ![benchmark](assets/benchmark.svg)
67
+
68
+ <details><summary>Definitions</summary>
69
+
70
+ - **Training Speed**: the number of training samples processed per second. (batch size=4, cutoff length=1024)
71
+ - **Rouge Score**: the Rouge-2 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (batch size=4, cutoff length=1024)
72
+ - **GPU Memory**: the peak GPU memory usage in 4-bit quantized training. (batch size=1, cutoff length=1024)
73
+ - We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA tuning.
74
+
75
+ </details>
76
+
77
+ ## Changelog
78
+
79
+ [24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#download-from-modelers-hub) for usage.
80
+
81
+ [24/09/19] We supported fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models.
82
+
83
+ [24/08/30] We supported fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thanks to the PR by [@simonJJJ](https://github.com/simonJJJ).
84
+
85
+ [24/08/27] We supported **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Use `enable_liger_kernel: true` to accelerate training.
86
+
87
+ [24/08/09] We supported the **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README_zh.md) for usage. Thanks to the PR by [@relic-yuexi](https://github.com/relic-yuexi).
88
+
89
+ <details><summary>Full changelog</summary>
90
+
91
+ [24/07/04] We supported [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use the `neat_packing: true` argument. Thanks to the PR by [@chuan298](https://github.com/chuan298).
92
+
93
+ [24/06/16] We supported the **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README_zh.md) for usage.
94
+
95
+ [24/06/07] We supported fine-tuning the **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** and **[GLM-4](https://github.com/THUDM/GLM-4)** models.
96
+
97
+ [24/05/26] We supported the **[SimPO](https://arxiv.org/abs/2405.14734)** preference alignment algorithm. See [examples](examples/README_zh.md) for usage.
98
+
99
+ [24/05/20] We supported fine-tuning the **PaliGemma** series models. Note that the PaliGemma models are pre-trained models; you need to fine-tune them with the `paligemma` template to obtain chat capability.
100
+
101
+ [24/05/18] We supported the **[KTO](https://arxiv.org/abs/2402.01306)** preference alignment algorithm. See [examples](examples/README_zh.md) for usage.
102
+
103
+ [24/05/14] We supported training and inference on Ascend NPU devices. See the [installation](#install-llama-factory) section for details.
104
+
105
+ [24/04/26] We supported fine-tuning the multimodal model **LLaVA-1.5**. See [examples](examples/README_zh.md) for usage.
106
+
107
+ [24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. The Hugging Face community has released two Llama-3 models fine-tuned with LLaMA Factory; see [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details.
108
+
109
+ [24/04/21] We supported **[Mixture-of-Depths training](https://arxiv.org/abs/2404.02258)** based on [AstraMindAI's repository](https://github.com/astramind-ai/Mixture-of-depths). See [examples](examples/README_zh.md) for usage.
110
+
111
+ [24/04/16] We supported the **[BAdam](https://arxiv.org/abs/2404.02827)** optimizer. See [examples](examples/README_zh.md) for usage.
112
+
113
+ [24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k trainable within 24GB). Compared with FlashAttention-2, it delivers **117%** training speed and **50%** memory savings. See [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for more benchmarks.
114
+
115
+ [24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See [examples](examples/README_zh.md) for usage.
116
+
117
+ [24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is now available on arXiv!
118
+
119
+ [24/03/20] We supported **FSDP+QLoRA**, which fine-tunes a 70B model on 2x24GB GPUs. See [examples](examples/README_zh.md) for usage.
120
+
121
+ [24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See [examples](examples/README_zh.md) for usage.
122
+
123
+ [24/03/07] We supported the **[GaLore](https://arxiv.org/abs/2403.03507)** optimizer. See [examples](examples/README_zh.md) for usage.
124
+
125
+ [24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster concurrent inference. Use `infer_backend: vllm` to enjoy **270%** inference speed.
126
+
127
+ [24/02/28] We supported **[DoRA](https://arxiv.org/abs/2402.09353)** fine-tuning. Use the `use_dora: true` argument to perform DoRA fine-tuning.
128
+
129
+ [24/02/15] We supported the **block expansion** method proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See [examples](examples/README_zh.md) for usage.
130
+
131
+ [24/02/05] Fine-tuning the Qwen1.5 (Qwen2 beta) series models is now supported in LLaMA-Factory. See this [blog post](https://qwenlm.github.io/zh/blog/qwen1.5/) for details.
132
+
133
+ [24/01/18] We supported **agent tuning** for most models. Specify `dataset: glaive_toolcall_zh` during fine-tuning to equip the model with tool-calling capability.
134
+
135
+ [23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s LoRA training acceleration for the LLaMA, Mistral and Yi models. Use the `use_unsloth: true` argument to enable the unsloth optimization. It delivers **170%** training speed; see [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.
136
+
137
+ [23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)**. See the hardware requirement [here](#hardware-requirement).
138
+
139
+ [23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)**. See [this tutorial](#download-from-modelscope-hub) for usage.
140
+
141
+ [23/10/21] We supported the **[NEFTune](https://arxiv.org/abs/2310.05914)** training trick. Use the `neftune_noise_alpha: 5` argument to enable NEFTune.
142
+
143
+ [23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Use the `shift_attn: true` argument to enable it.
144
+
145
+ [23/09/23] We integrated the MMLU, C-Eval and CMMLU benchmarks into the project. See [examples](examples/README_zh.md) for usage.
146
+
147
+ [23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Use the `flash_attn: fa2` argument to enable FlashAttention-2 if you are using RTX 4090, A100 or H100 GPUs.
148
+
149
+ [23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Use `rope_scaling: linear` to train models and `rope_scaling: dynamic` to evaluate them.
150
+
151
+ [23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [examples](examples/README_zh.md) for usage.
152
+
153
+ [23/07/31] We supported **dataset streaming**. Use the `streaming: true` and `max_steps: 10000` arguments to stream datasets.
154
+
155
+ [23/07/29] We released two instruction-tuned 13B models on Hugging Face. See our Hugging Face repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details.
156
+
157
+ [23/07/18] We developed an **all-in-one Web UI** for training and evaluation. Use `train_web.py` to fine-tune models in your browser. Thanks to [@KanadeSiina](https://github.com/KanadeSiina) and [@codemayq](https://github.com/codemayq) for their efforts in developing this feature.
158
+
159
+ [23/07/09] We open-sourced **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹, an easy-to-use toolkit for quickly editing the factual memory of large language models. Follow the [FastEdit](https://github.com/hiyouga/FastEdit) project if you are interested.
160
+
161
+ [23/06/29] We provided a **reproducible example** of instruction-model fine-tuning; see [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft) for details.
162
+
163
+ [23/06/22] We aligned the [demo API](src/api_demo.py) with the [OpenAI API](https://platform.openai.com/docs/api-reference/chat) format, so you can plug fine-tuned models into **any ChatGPT-based application**.
164
+
165
+ [23/06/03] We implemented 4-bit LoRA training (a.k.a. **[QLoRA](https://github.com/artidoro/qlora)**). See [examples](examples/README_zh.md) for usage.
166
+
167
+ </details>
168
+
169
+ ## Supported Models
170
+
171
+ | Model                                                             | Model size                       | Template         |
172
+ | ----------------------------------------------------------------- | -------------------------------- | ---------------- |
173
+ | [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |
174
+ | [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |
175
+ | [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |
176
+ | [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |
177
+ | [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |
178
+ | [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |
179
+ | [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma |
180
+ | [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 |
181
+ | [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 |
182
+ | [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |
183
+ | [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |
184
+ | [Llama 3-3.2](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 |
185
+ | [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava |
186
+ | [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next |
187
+ | [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video |
188
+ | [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 |
189
+ | [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |
190
+ | [OLMo](https://huggingface.co/allenai) | 1B/7B | - |
191
+ | [PaliGemma](https://huggingface.co/google) | 3B | paligemma |
192
+ | [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |
193
+ | [Phi-3](https://huggingface.co/microsoft) | 4B/7B/14B | phi |
194
+ | [Qwen (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen |
195
+ | [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B/72B | qwen2_vl |
196
+ | [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |
197
+ | [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |
198
+ | [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi |
199
+ | [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |
200
+ | [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan |
201
+
202
+ > [!NOTE]
203
+ > For the "base" models, the `template` argument can be any of `default`, `alpaca`, `vicuna`, etc. But make sure to use the **corresponding template** for the "instruct/chat" models.
204
+ >
205
+ > Remember to use the **same** template in training and inference.
206
+
207
+ Please refer to [constants.py](src/llamafactory/extras/constants.py) for the full list of models supported by the project.
208
+
209
+ You can also add a custom chat template to [template.py](src/llamafactory/data/template.py). A minimal sketch of the `template` setting follows.
210
+
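+ For instance, a training YAML and its matching inference YAML would both carry the same line (a minimal sketch; `llama3` is illustrative and must match your model):
+
+ ```yaml
+ # keep this value identical in the training and inference configs
+ template: llama3
+ ```
+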
211
+ ## Supported Training Approaches
212
+
213
+ | Approach               | Full-tuning        | Freeze-tuning      | LoRA               | QLoRA              |
214
+ | ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
215
+ | Pre-Training           | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
216
+ | Supervised Fine-Tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
217
+ | Reward Modeling        | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
218
+ | PPO Training           | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
219
+ | DPO Training           | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
220
+ | KTO Training           | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
221
+ | ORPO Training          | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
222
+ | SimPO Training         | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
223
+
224
+ > [!TIP]
225
+ > The implementation details of PPO can be found in [this blog](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html).
226
+
227
+ ## Provided Datasets
228
+
229
+ <details><summary>Pre-training datasets</summary>
230
+
231
+ - [Wiki Demo (en)](data/wiki_demo.txt)
232
+ - [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
233
+ - [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)
234
+ - [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220)
235
+ - [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)
236
+ - [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile)
237
+ - [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B)
238
+ - [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
239
+ - [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
240
+ - [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack)
241
+ - [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)
242
+
243
+ </details>
244
+
245
+ <details><summary>Supervised fine-tuning datasets</summary>
246
+
247
+ - [Identity (en&zh)](data/identity.json)
248
+ - [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
249
+ - [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)
250
+ - [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
251
+ - [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
252
+ - [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
253
+ - [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
254
+ - [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
255
+ - [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
256
+ - [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
257
+ - [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)
258
+ - [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
259
+ - [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
260
+ - [UltraChat (en)](https://github.com/thunlp/UltraChat)
261
+ - [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
262
+ - [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
263
+ - [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
264
+ - [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca)
265
+ - [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca)
266
+ - [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
267
+ - [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
268
+ - [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa)
269
+ - [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa)
270
+ - [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
271
+ - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
272
+ - [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
273
+ - [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
274
+ - [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)
275
+ - [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
276
+ - [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
277
+ - [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)
278
+ - [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)
279
+ - [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
280
+ - [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
281
+ - [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
282
+ - [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
283
+ - [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
284
+ - [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
285
+ - [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
286
+ - [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)
287
+ - [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
288
+ - [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)
289
+ - [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
290
+ - [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)
291
+ - [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de)
292
+ - [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de)
293
+ - [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de)
294
+ - [Dolphin (de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de)
295
+ - [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de)
296
+ - [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de)
297
+ - [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de)
298
+
299
+ </details>
300
+
301
+ <details><summary>Preference datasets</summary>
302
+
303
+ - [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
304
+ - [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
305
+ - [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)
306
+ - [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)
307
+ - [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
308
+ - [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
309
+ - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
310
+ - [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)
311
+ - [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)
312
+
313
+ </details>
314
+
315
+ Some datasets require confirmation before use, so we recommend logging in to your Hugging Face account with the following commands.
316
+
317
+ ```bash
318
+ pip install --upgrade huggingface_hub
319
+ huggingface-cli login
320
+ ```
321
+
322
+ ## Requirement
323
+
324
+ | Mandatory    | Minimum | Recommend |
325
+ | ------------ | ------- | --------- |
326
+ | python | 3.8 | 3.11 |
327
+ | torch | 1.13.1 | 2.4.0 |
328
+ | transformers | 4.41.2 | 4.43.4 |
329
+ | datasets | 2.16.0 | 2.20.0 |
330
+ | accelerate | 0.30.1 | 0.32.0 |
331
+ | peft | 0.11.1 | 0.12.0 |
332
+ | trl | 0.8.6 | 0.9.6 |
333
+
334
+ | Optional     | Minimum | Recommend |
335
+ | ------------ | ------- | --------- |
336
+ | CUDA | 11.6 | 12.2 |
337
+ | deepspeed | 0.10.0 | 0.14.0 |
338
+ | bitsandbytes | 0.39.0 | 0.43.1 |
339
+ | vllm | 0.4.3 | 0.5.0 |
340
+ | flash-attn | 2.3.0 | 2.6.3 |
341
+
342
+ ### Hardware Requirement
343
+
344
+ \* *estimated*
345
+
346
+ | Method            | Bits | 7B    | 13B   | 30B   | 70B    | 110B   | 8x7B  | 8x22B  |
347
+ | ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ |
348
+ | Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB |
349
+ | Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB |
350
+ | Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB |
351
+ | LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB |
352
+ | QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB |
353
+ | QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB |
354
+ | QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB |
355
+
356
+ ## Getting Started
357
+
358
+ ### Install LLaMA Factory
359
+
360
+ > [!IMPORTANT]
361
+ > This step is mandatory.
362
+
363
+ ```bash
364
+ git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
365
+ cd LLaMA-Factory
366
+ pip install -e ".[torch,metrics]"
367
+ ```
368
+
369
+ Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, awq, aqlm, vllm, galore, badam, adam-mini, qwen, modelscope, openmind, quality
370
+
371
+ > [!TIP]
372
+ > Use `pip install --no-deps -e .` to resolve package conflicts.
373
+
374
+ <details><summary>For Windows users</summary>
375
+
376
+ To enable quantized LoRA (QLoRA) on Windows, you need to install a pre-built `bitsandbytes` library that supports CUDA 11.1 to 12.2. Choose the appropriate [release](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) according to your CUDA version.
377
+
378
+ ```bash
379
+ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
380
+ ```
381
+
382
+ To enable FlashAttention-2 on Windows, you need to install a pre-built `flash-attn` library that supports CUDA 12.1 to 12.2. Download the desired version from [flash-attention](https://github.com/bdashore3/flash-attention/releases).
383
+
384
+ </details>
385
+
386
+ <details><summary>For Ascend NPU users</summary>
387
+
388
+ To install LLaMA Factory on Ascend NPU devices, specify the extra dependencies and install with `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**; follow the [installation tutorial](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/80RC2alpha002/quickstart/quickstart/quickstart_18_0004.html) or use the following commands:
389
+
390
+ ```bash
391
+ # replace the URL according to your CANN version and devices
392
+ # install CANN Toolkit
393
+ wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run
394
+ bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install
395
+
396
+ # install CANN Kernels
397
+ wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run
398
+ bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install
399
+
400
+ # set environment variables
401
+ source /usr/local/Ascend/ascend-toolkit/set_env.sh
402
+ ```
403
+
404
+ | Requirement  | Minimum | Recommend   |
405
+ | ------------ | ------- | ----------- |
406
+ | CANN | 8.0.RC1 | 8.0.RC1 |
407
+ | torch | 2.1.0 | 2.1.0 |
408
+ | torch-npu | 2.1.0 | 2.1.0.post3 |
409
+ | deepspeed | 0.13.2 | 0.13.2 |
410
+
411
+ Use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the computing devices, as sketched below.
412
+
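+ A minimal sketch of launching training on the first NPU (the YAML path is taken from the quickstart):
+
+ ```bash
+ # select NPU 0; do not use CUDA_VISIBLE_DEVICES on Ascend devices
+ ASCEND_RT_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
+ ```
+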
413
+ If you cannot run inference normally, try setting `do_sample: false`.
414
+
415
+ Download pre-built Docker images: [32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
416
+
417
+ </details>
418
+
419
+ ### Data Preparation
420
+
421
+ Please refer to [data/README_zh.md](data/README_zh.md) for the format of the dataset files. You can use datasets from the HuggingFace / ModelScope / Modelers hubs or load local datasets.
422
+
423
+ > [!NOTE]
424
+ > Please update `data/dataset_info.json` when using a custom dataset; a minimal entry is sketched below.
425
+
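+ A minimal sketch of a custom entry in `data/dataset_info.json` (the name `my_dataset`, the file `my_data.json` and the column mapping are placeholders; see [data/README_zh.md](data/README_zh.md) for the full field reference):
+
+ ```json
+ {
+   "my_dataset": {
+     "file_name": "my_data.json",
+     "columns": {
+       "prompt": "instruction",
+       "query": "input",
+       "response": "output"
+     }
+   }
+ }
+ ```
+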
426
+ ### Quickstart
427
+
428
+ The three commands below run LoRA **fine-tuning**, **inference** and **merging** for the Llama3-8B-Instruct model, respectively.
429
+
430
+ ```bash
431
+ llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
432
+ llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
433
+ llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
434
+ ```
435
+
436
+ See [examples/README_zh.md](examples/README_zh.md) for advanced usage (including distributed training on multiple GPUs).
437
+
438
+ > [!TIP]
439
+ > Use `llamafactory-cli help` to show the help information.
440
+
441
+ ### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))
442
+
443
+ ```bash
444
+ llamafactory-cli webui
445
+ ```
446
+
447
+ ### Build Docker
448
+
449
+ For CUDA users:
450
+
451
+ ```bash
452
+ cd docker/docker-cuda/
453
+ docker compose up -d
454
+ docker compose exec llamafactory bash
455
+ ```
456
+
457
+ For Ascend NPU users:
458
+
459
+ ```bash
460
+ cd docker/docker-npu/
461
+ docker compose up -d
462
+ docker compose exec llamafactory bash
463
+ ```
464
+
465
+ For AMD ROCm users:
466
+
467
+ ```bash
468
+ cd docker/docker-rocm/
469
+ docker compose up -d
470
+ docker compose exec llamafactory bash
471
+ ```
472
+
473
+ <details><summary>Build without Docker Compose</summary>
474
+
475
+ For CUDA users:
476
+
477
+ ```bash
478
+ docker build -f ./docker/docker-cuda/Dockerfile \
479
+ --build-arg INSTALL_BNB=false \
480
+ --build-arg INSTALL_VLLM=false \
481
+ --build-arg INSTALL_DEEPSPEED=false \
482
+ --build-arg INSTALL_FLASHATTN=false \
483
+ --build-arg PIP_INDEX=https://pypi.org/simple \
484
+ -t llamafactory:latest .
485
+
486
+ docker run -dit --gpus=all \
487
+ -v ./hf_cache:/root/.cache/huggingface \
488
+ -v ./ms_cache:/root/.cache/modelscope \
489
+ -v ./om_cache:/root/.cache/openmind \
490
+ -v ./data:/app/data \
491
+ -v ./output:/app/output \
492
+ -p 7860:7860 \
493
+ -p 8000:8000 \
494
+ --shm-size 16G \
495
+ --name llamafactory \
496
+ llamafactory:latest
497
+
498
+ docker exec -it llamafactory bash
499
+ ```
500
+
501
+ For Ascend NPU users:
502
+
503
+ ```bash
504
+ # choose the docker image according to your environment
505
+ docker build -f ./docker/docker-npu/Dockerfile \
506
+ --build-arg INSTALL_DEEPSPEED=false \
507
+ --build-arg PIP_INDEX=https://pypi.org/simple \
508
+ -t llamafactory:latest .
509
+
510
+ # change `device` according to your resources
511
+ docker run -dit \
512
+ -v ./hf_cache:/root/.cache/huggingface \
513
+ -v ./ms_cache:/root/.cache/modelscope \
514
+ -v ./om_cache:/root/.cache/openmind \
515
+ -v ./data:/app/data \
516
+ -v ./output:/app/output \
517
+ -v /usr/local/dcmi:/usr/local/dcmi \
518
+ -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
519
+ -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
520
+ -v /etc/ascend_install.info:/etc/ascend_install.info \
521
+ -p 7860:7860 \
522
+ -p 8000:8000 \
523
+ --device /dev/davinci0 \
524
+ --device /dev/davinci_manager \
525
+ --device /dev/devmm_svm \
526
+ --device /dev/hisi_hdc \
527
+ --shm-size 16G \
528
+ --name llamafactory \
529
+ llamafactory:latest
530
+
531
+ docker exec -it llamafactory bash
532
+ ```
533
+
534
+ For AMD ROCm users:
535
+
536
+ ```bash
537
+ docker build -f ./docker/docker-rocm/Dockerfile \
538
+ --build-arg INSTALL_BNB=false \
539
+ --build-arg INSTALL_VLLM=false \
540
+ --build-arg INSTALL_DEEPSPEED=false \
541
+ --build-arg INSTALL_FLASHATTN=false \
542
+ --build-arg PIP_INDEX=https://pypi.org/simple \
543
+ -t llamafactory:latest .
544
+
545
+ docker run -dit \
546
+ -v ./hf_cache:/root/.cache/huggingface \
547
+ -v ./ms_cache:/root/.cache/modelscope \
548
+ -v ./om_cache:/root/.cache/openmind \
549
+ -v ./data:/app/data \
550
+ -v ./output:/app/output \
551
+ -v ./saves:/app/saves \
552
+ -p 7860:7860 \
553
+ -p 8000:8000 \
554
+ --device /dev/kfd \
555
+ --device /dev/dri \
556
+ --shm-size 16G \
557
+ --name llamafactory \
558
+ llamafactory:latest
559
+
560
+ docker exec -it llamafactory bash
561
+ ```
562
+
563
+ </details>
564
+
565
+ <details><summary>Details about volumes</summary>
566
+
567
+ - `hf_cache`: reuses the Hugging Face cache folder on the host machine; it can be redirected to a new directory.
568
+ - `ms_cache`: similar to the Hugging Face cache folder, provided for ModelScope users.
569
+ - `om_cache`: similar to the Hugging Face cache folder, provided for Modelers users.
570
+ - `data`: the folder on the host machine that stores the datasets.
571
+ - `output`: set the export directory to this path so that the exported model can be accessed on the host machine.
572
+
573
+ </details>
574
+
575
+ ### Deploy OpenAI-style API with vLLM
576
+
577
+ ```bash
578
+ API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
579
+ ```
580
+
581
+ > [!TIP]
582
+ > See [this page](https://platform.openai.com/docs/api-reference/chat/create) for the API documentation.
583
+
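+ Once the server is running, a quick smoke test might look like this (a sketch assuming the default port 8000 and the standard OpenAI-compatible `/v1/chat/completions` route; the `model` field is a placeholder):
+
+ ```bash
+ curl http://localhost:8000/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello!"}]}'
+ ```
+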
584
+ ### Download from ModelScope Hub
585
+
586
+ If you have trouble downloading models and datasets from Hugging Face, you can use the ModelScope Hub as follows.
587
+
588
+ ```bash
589
+ export USE_MODELSCOPE_HUB=1 # use `set USE_MODELSCOPE_HUB=1` on Windows
590
+ ```
591
+
592
+ Set `model_name_or_path` to the model ID to load the corresponding model. You can browse all available models on the [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`, as sketched below.
593
+
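+ The relevant line of the training or inference YAML would then read (the rest of the config stays unchanged):
+
+ ```yaml
+ model_name_or_path: LLM-Research/Meta-Llama-3-8B-Instruct
+ ```
+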
594
+ ### Download from Modelers Hub
595
+
596
+ You can also download models and datasets from the Modelers Hub as follows.
597
+
598
+ ```bash
599
+ export USE_OPENMIND_HUB=1 # use `set USE_OPENMIND_HUB=1` on Windows
600
+ ```
601
+
602
+ Set `model_name_or_path` to the model ID to load the corresponding model. You can browse all available models on the [Modelers Hub](https://modelers.cn/models), e.g., `TeleAI/TeleChat-7B-pt`.
603
+
604
+ ### Use W&B Logger
605
+
606
+ To log experiment results with [Weights & Biases](https://wandb.ai), add the following arguments to the yaml file.
607
+
608
+ ```yaml
609
+ report_to: wandb
610
+ run_name: test_run # optional
611
+ ```
612
+
613
+ When launching training tasks, set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) to log in to your W&B account, as sketched below.
614
+
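+ For example (the YAML path is taken from the quickstart; replace the key with your own):
+
+ ```bash
+ WANDB_API_KEY=<your_api_key> llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
+ ```
+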
615
+ ## Projects using LLaMA Factory
616
+
617
+ If you have a project that you would like to add to the list below, contact us via email or open a PR.
618
+
619
+ <details><summary>Click to show</summary>
620
+
621
+ 1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)
622
+ 1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)
623
+ 1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)
624
+ 1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816)
625
+ 1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710)
626
+ 1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)
627
+ 1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)
628
+ 1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904)
629
+ 1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625)
630
+ 1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176)
631
+ 1. Yang et al. LaCo: Large Language Model Pruning via Layer Collapse. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187)
632
+ 1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746)
633
+ 1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801)
634
+ 1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809)
635
+ 1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819)
636
+ 1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204)
637
+ 1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714)
638
+ 1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043)
639
+ 1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)
640
+ 1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)
641
+ 1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)
642
+ 1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)
643
+ 1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)
644
+ 1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)
645
+ 1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)
646
+ 1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)
647
+ 1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)
648
+ 1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)
649
+ 1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)
650
+ 1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)
651
+ 1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)
652
+ 1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)
653
+ 1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)
654
+ 1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)
655
+ 1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)
656
+ 1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140)
657
+ 1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)
658
+ 1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760)
659
+ 1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378)
660
+ 1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055)
661
+ 1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739)
662
+ 1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816)
663
+ 1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215)
664
+ 1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30)
665
+ 1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380)
666
+ 1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106)
667
+ 1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136)
668
+ 1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496)
669
+ 1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688)
670
+ 1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955)
671
+ 1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973)
672
+ 1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115)
673
+ 1. Zhu et al. Are Large Language Models Good Statisticians? 2024. [[arxiv]](https://arxiv.org/abs/2406.07815)
674
+ 1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099)
675
+ 1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173)
676
+ 1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074)
677
+ 1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408)
678
+ 1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546)
679
+ 1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695)
680
+ 1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233)
681
+ 1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069)
682
+ 1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25)
683
+ 1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949)
684
+ 1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365)
685
+ 1. Lin et al. DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470)
686
+ 1. Bako et al. Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129)
687
+ 1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044)
688
+ 1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756)
689
+ 1. Inouye et al. Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/)
690
+ 1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561)
691
+ 1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637)
692
+ 1. Sun et al. LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535)
693
+ 1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705)
694
+ 1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137)
695
+ 1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf)
696
+ 1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11)
697
+ 1. Wang et al. Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23)
698
+ 1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693)
699
+ 1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168)
700
+ 1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/)
701
+ 1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072)
702
+ 1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: StarWhisper, a large language model for astronomy, fine-tuned from ChatGLM2-6B and Qwen-14B on astronomical data.
703
+ 1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: DISC-LawLLM, a large language model for the Chinese legal domain, fine-tuned from Baichuan-13B, with capabilities in legal reasoning and knowledge retrieval.
704
+ 1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: Sunsimiao, a Chinese medical large language model, fine-tuned from Baichuan-7B and ChatGLM-6B on Chinese medical data.
705
+ 1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: CareGPT, a medical large language model project, fine-tuned from LLaMA2-7B and Baichuan-13B on Chinese medical data.
706
+ 1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI personality large language models, able to give any LLM one of 16 different personality types through tailored datasets and training methods.
707
+ 1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model for generating Stable Diffusion prompts. [[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
708
+ 1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A Chinese multimodal medical large language model, fine-tuned from LLaVA-1.5-7B on Chinese multimodal medical data.
709
+ 1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models.
710
+ 1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: A development kit for fine-tuning large language models on Windows machines with NVIDIA RTX GPUs.
711
+ 1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: A low-code development tool for building multi-agent LLM applications, supporting model fine-tuning based on LLaMA Factory.
712
+
713
+ </details>
714
+
715
+ ## License
716
+
717
+ The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license.
718
+
719
+ Please follow the corresponding model licenses when using the model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
720
+
721
+ ## Citation
722
+
723
+ If this work is helpful, please kindly cite as:
724
+
725
+ ```bibtex
726
+ @inproceedings{zheng2024llamafactory,
727
+ title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
728
+ author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
729
+ booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
730
+ address={Bangkok, Thailand},
731
+ publisher={Association for Computational Linguistics},
732
+ year={2024},
733
+ url={http://arxiv.org/abs/2403.13372}
734
+ }
735
+ ```
736
+
737
+ ## Acknowledgement
738
+
739
+ This repo benefits from [PEFT](https://github.com/huggingface/peft), [TRL](https://github.com/huggingface/trl), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks to the authors for their wonderful work.
740
+
741
+ ## Star History
742
+
743
+ ![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Factory&type=Date)
launching_script.sh ADDED
@@ -0,0 +1,11 @@
 
 
 
 
 
 
 
 
 
 
 
 
1
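+ # Weights & Biases API key for experiment logging (the project/run names are set below)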
+ export WANDB_API_KEY=c84b33041138e14a804e9b10d523a7bedac346a4
2
+ cd /mnt/data/guibin.chen/open-o1/LLaMA-Factory
3
+ source /mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/bin/activate
4
+
5
+ export WANDB_PROJECT="re_arc_v3_wlcb"
6
+ export WANDB_NAME="qwen_2_re_arc_test"
7
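+ # FORCE_TORCHRUN=1 makes llamafactory-cli launch training via torchrun (used for distributed/DeepSpeed runs)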
+ export FORCE_TORCHRUN=1
8
+ # export NNODES=$WORLD_SIZE
9
+ # export NPROC_PER_NODE=$KUBERNETES_CONTAINER_RESOURCE_GPU
10
+
11
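+ # Full-parameter SFT of Qwen2 with the DeepSpeed ZeRO-3 YAML config; stdout and stderr are captured in qwen2.log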
+ llamafactory-cli train examples/train_full/qwen2_full_sft_ds3_local_v2_with_pt.yaml > qwen2.log 2>&1
pyproject.toml ADDED
@@ -0,0 +1,33 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [build-system]
2
+ requires = ["setuptools>=61.0"]
3
+ build-backend = "setuptools.build_meta"
4
+
5
+ [tool.ruff]
6
+ target-version = "py38"
7
+ line-length = 119
8
+ indent-width = 4
9
+
10
+ [tool.ruff.lint]
11
+ ignore = ["C408", "C901", "E501", "E731", "E741", "W605"]
12
+ select = ["C", "E", "F", "I", "W"]
13
+
14
+ [tool.ruff.lint.isort]
15
+ lines-after-imports = 2
16
+ known-first-party = ["llamafactory"]
17
+ known-third-party = [
18
+ "accelerate",
19
+ "datasets",
20
+ "gradio",
21
+ "numpy",
22
+ "peft",
23
+ "torch",
24
+ "transformers",
25
+ "trl"
26
+ ]
27
+
28
+ [tool.ruff.format]
29
+ quote-style = "double"
30
+ indent-style = "space"
31
+ docstring-code-format = true
32
+ skip-magic-trailing-comma = false
33
+ line-ending = "auto"
qwen2.log ADDED
@@ -0,0 +1,578 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
0
  0%| | 0/7044 [00:00<?, ?it/s][2024-10-20 11:24:15,553] [WARNING] [stage3.py:2104:step] 1 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time
 
1
  0%| | 1/7044 [00:16<32:21:12, 16.54s/it][rank0]: Traceback (most recent call last):
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
2
  0%| | 1/7044 [00:20<40:23:26, 20.65s/it]
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [2024-10-20 11:18:17,804] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2
+ 10/20/2024 11:19:54 - INFO - llamafactory.cli - Initializing distributed tasks at: 127.0.0.1:22328
3
+ W1020 11:20:22.856000 5294 torch/distributed/run.py:793]
4
+ W1020 11:20:22.856000 5294 torch/distributed/run.py:793] *****************************************
5
+ W1020 11:20:22.856000 5294 torch/distributed/run.py:793] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
6
+ W1020 11:20:22.856000 5294 torch/distributed/run.py:793] *****************************************
7
+ [2024-10-20 11:22:00,125] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
8
+ [2024-10-20 11:22:00,125] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
9
+ [2024-10-20 11:23:12,639] [INFO] [comm.py:652:init_distributed] cdb=None
10
+ [2024-10-20 11:23:12,639] [INFO] [comm.py:652:init_distributed] cdb=None
11
+ [2024-10-20 11:23:12,639] [INFO] [comm.py:683:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
12
+ 10/20/2024 11:23:12 - INFO - llamafactory.hparams.parser - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
13
+ 10/20/2024 11:23:12 - INFO - llamafactory.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
14
+ [INFO|configuration_utils.py:673] 2024-10-20 11:23:12,820 >> loading configuration file /mnt/data/zifeng.cao/reasoning/arc-agi/LLaMA-Factory/saves/Qwen2.5-Coder-7B-Instruct/pt_output_plus_step_output/checkpoint-274/config.json
15
+ [INFO|configuration_utils.py:742] 2024-10-20 11:23:12,849 >> Model config Qwen2Config {
16
+ "_name_or_path": "/mnt/data/zifeng.cao/reasoning/arc-agi/LLaMA-Factory/saves/Qwen2.5-Coder-7B-Instruct/pt_output_plus_step_output/checkpoint-274",
17
+ "architectures": [
18
+ "Qwen2ForCausalLM"
19
+ ],
20
+ "attention_dropout": 0.0,
21
+ "bos_token_id": 151643,
22
+ "eos_token_id": 151643,
23
+ "hidden_act": "silu",
24
+ "hidden_size": 3584,
25
+ "initializer_range": 0.02,
26
+ "intermediate_size": 18944,
27
+ "max_position_embeddings": 32768,
28
+ "max_window_layers": 28,
29
+ "model_type": "qwen2",
30
+ "num_attention_heads": 28,
31
+ "num_hidden_layers": 28,
32
+ "num_key_value_heads": 4,
33
+ "rms_norm_eps": 1e-06,
34
+ "rope_scaling": null,
35
+ "rope_theta": 1000000.0,
36
+ "sliding_window": null,
37
+ "tie_word_embeddings": false,
38
+ "torch_dtype": "bfloat16",
39
+ "transformers_version": "4.45.2",
40
+ "use_cache": false,
41
+ "use_sliding_window": false,
42
+ "vocab_size": 152064
43
+ }
44
+
45
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:12,926 >> loading file vocab.json
46
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:12,926 >> loading file merges.txt
47
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:12,926 >> loading file tokenizer.json
48
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:12,926 >> loading file added_tokens.json
49
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:12,926 >> loading file special_tokens_map.json
50
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:12,926 >> loading file tokenizer_config.json
51
+ [INFO|tokenization_utils_base.py:2470] 2024-10-20 11:23:13,325 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
52
+ [INFO|configuration_utils.py:673] 2024-10-20 11:23:13,327 >> loading configuration file /mnt/data/zifeng.cao/reasoning/arc-agi/LLaMA-Factory/saves/Qwen2.5-Coder-7B-Instruct/pt_output_plus_step_output/checkpoint-274/config.json
53
+ [INFO|configuration_utils.py:742] 2024-10-20 11:23:13,328 >> Model config Qwen2Config {
54
+ "_name_or_path": "/mnt/data/zifeng.cao/reasoning/arc-agi/LLaMA-Factory/saves/Qwen2.5-Coder-7B-Instruct/pt_output_plus_step_output/checkpoint-274",
55
+ "architectures": [
56
+ "Qwen2ForCausalLM"
57
+ ],
58
+ "attention_dropout": 0.0,
59
+ "bos_token_id": 151643,
60
+ "eos_token_id": 151643,
61
+ "hidden_act": "silu",
62
+ "hidden_size": 3584,
63
+ "initializer_range": 0.02,
64
+ "intermediate_size": 18944,
65
+ "max_position_embeddings": 32768,
66
+ "max_window_layers": 28,
67
+ "model_type": "qwen2",
68
+ "num_attention_heads": 28,
69
+ "num_hidden_layers": 28,
70
+ "num_key_value_heads": 4,
71
+ "rms_norm_eps": 1e-06,
72
+ "rope_scaling": null,
73
+ "rope_theta": 1000000.0,
74
+ "sliding_window": null,
75
+ "tie_word_embeddings": false,
76
+ "torch_dtype": "bfloat16",
77
+ "transformers_version": "4.45.2",
78
+ "use_cache": false,
79
+ "use_sliding_window": false,
80
+ "vocab_size": 152064
81
+ }
82
+
83
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:13,340 >> loading file vocab.json
84
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:13,341 >> loading file merges.txt
85
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:13,341 >> loading file tokenizer.json
86
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:13,341 >> loading file added_tokens.json
87
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:13,341 >> loading file special_tokens_map.json
88
+ [INFO|tokenization_utils_base.py:2204] 2024-10-20 11:23:13,341 >> loading file tokenizer_config.json
89
+ 10/20/2024 11:23:13 - WARNING - llamafactory.model.loader - Processor was not found: 'Qwen2Config' object has no attribute 'vision_config'.
90
+ 10/20/2024 11:23:13 - INFO - llamafactory.data.template - Replace eos token: <|im_end|>
91
+ [rank1]:[W1020 11:23:13.167659749 ProcessGroupNCCL.cpp:4115] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device,or call init_process_group() with a device_id.
92
+ [INFO|tokenization_utils_base.py:2470] 2024-10-20 11:23:13,765 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
93
+ 10/20/2024 11:23:13 - WARNING - llamafactory.model.loader - Processor was not found: 'Qwen2Config' object has no attribute 'vision_config'.
94
+ 10/20/2024 11:23:13 - INFO - llamafactory.data.template - Replace eos token: <|im_end|>
95
+ 10/20/2024 11:23:13 - INFO - llamafactory.data.loader - Loading dataset re_arc_v3.json...
96
+
97
+
98
+ [rank0]:[W1020 11:23:17.104395895 ProcessGroupNCCL.cpp:4115] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device,or call init_process_group() with a device_id.
99
+ 10/20/2024 11:23:18 - INFO - llamafactory.data.loader - Loading dataset re_arc_v3.json...
100
+
101
+ training example:
102
+ input_ids:
103
+ [151644, 8948, 198, 2610, 525, 264, 10950, 17847, 429, 646, 11625, 32711, 9079, 553, 1667, 264, 7199, 738, 315, 45558, 5746, 429, 525, 11537, 304, 13027, 13, 715, 12210, 5430, 4008, 75241, 12, 8886, 3383, 17167, 315, 2163, 264, 22955, 315, 4862, 10295, 11, 1380, 458, 4862, 3110, 17167, 315, 458, 1946, 5827, 323, 458, 2550, 5827, 13, 715, 12, 1752, 1817, 4862, 3110, 11, 279, 2550, 5827, 374, 279, 1102, 315, 18950, 279, 1852, 3383, 18906, 17991, 311, 279, 1946, 5827, 13, 715, 12, 576, 5795, 374, 311, 23583, 279, 17991, 504, 279, 2421, 4862, 10295, 624, 12, 576, 17991, 374, 264, 3383, 18906, 5827, 17991, 11, 892, 646, 387, 28502, 3865, 1119, 264, 8500, 315, 279, 45558, 5746, 624, 12210, 45558, 4008, 75241, 12, 20768, 323, 16605, 198, 220, 481, 3070, 4173, 95518, 18614, 5257, 821, 4494, 1075, 1565, 3543, 7808, 1565, 1190, 7808, 1565, 31941, 7808, 323, 803, 311, 27596, 5827, 7525, 624, 220, 481, 3070, 9386, 95518, 29734, 1894, 18021, 320, 68, 1302, 2572, 1565, 73956, 7808, 1565, 5225, 63, 701, 2710, 18021, 28654, 51, 7808, 1565, 37, 63, 701, 323, 72845, 22879, 320, 68, 1302, 2572, 1565, 3124, 7808, 1565, 22554, 63, 4292, 12, 2340, 35209, 198, 220, 481, 3070, 8815, 24883, 95518, 23550, 1075, 1565, 718, 7808, 1565, 59442, 7808, 1565, 64648, 7808, 323, 1565, 59394, 63, 2736, 6770, 34784, 389, 25780, 476, 45225, 624, 220, 481, 3070, 64312, 24883, 95518, 23550, 1741, 438, 1565, 16788, 7808, 1565, 39017, 7808, 323, 1565, 21028, 63, 3705, 19819, 55081, 624, 220, 481, 3070, 1043, 24883, 95518, 23550, 1075, 1565, 16912, 7808, 1565, 1358, 7808, 1565, 19052, 7808, 1565, 59251, 7808, 323, 1565, 9789, 85477, 63, 10091, 821, 23853, 624, 12, 10587, 323, 3002, 60911, 2914, 198, 220, 481, 3070, 3543, 34286, 95518, 1565, 19943, 63, 11450, 56349, 448, 5189, 15336, 323, 2750, 624, 220, 481, 3070, 3543, 53652, 95518, 23550, 1075, 1565, 4640, 24, 15, 7808, 1565, 71, 73225, 7808, 1565, 8602, 2246, 7808, 323, 1565, 2923, 12445, 63, 5165, 56349, 304, 5257, 5510, 624, 220, 481, 3070, 3136, 4203, 24883, 95518, 1565, 34147, 7808, 1565, 4997, 2292, 7808, 1565, 85, 6960, 7808, 323, 1565, 10666, 63, 8649, 476, 5602, 5479, 315, 56349, 624, 220, 481, 3070, 1190, 323, 30412, 55713, 95518, 23550, 1075, 1565, 19210, 7808, 1565, 30590, 7808, 1565, 13418, 7808, 1565, 983, 14987, 7808, 323, 1565, 2758, 789, 63, 3705, 5827, 28660, 323, 6171, 624, 12, 18320, 323, 81531, 198, 220, 481, 3070, 1636, 18320, 95518, 23550, 1741, 438, 1565, 3562, 3423, 7808, 1565, 55271, 3423, 7808, 1565, 3423, 1830, 7808, 323, 1565, 59674, 63, 23643, 1894, 42685, 624, 220, 481, 3070, 1190, 81531, 95518, 1565, 3423, 5315, 63, 323, 1565, 2141, 5315, 63, 4051, 6171, 553, 1894, 476, 1379, 624, 220, 481, 3070, 90618, 18320, 95518, 23550, 1075, 1565, 3057, 7808, 1565, 3487, 7808, 1565, 1515, 25337, 7808, 323, 1565, 24739, 18181, 63, 23643, 27979, 11871, 624, 12, 96054, 323, 425, 13586, 198, 220, 481, 3070, 14611, 1927, 95518, 1565, 6459, 7808, 1565, 79488, 7808, 1565, 67, 79488, 7808, 323, 1565, 482, 24101, 63, 8253, 13234, 1948, 5827, 14937, 624, 220, 481, 3070, 37909, 95518, 23550, 1075, 1565, 2011, 7808, 1565, 84699, 7808, 1565, 411, 2011, 7808, 323, 1565, 38630, 388, 63, 10091, 30618, 5671, 315, 28660, 624, 12, 17954, 198, 220, 481, 3070, 13999, 4440, 23470, 95518, 1565, 359, 333, 396, 63, 26885, 4194, 25780, 2878, 5189, 14262, 323, 16829, 5866, 624, 220, 481, 3070, 3543, 18954, 95518, 1565, 285, 15604, 63, 12341, 421, 458, 1946, 374, 264, 2697, 5827, 624, 220, 481, 3070, 3543, 89588, 95518, 1565, 2243, 15604, 63, 56033, 11469, 311, 279, 5827, 943, 624, 
12210, 15042, 315, 279, 7907, 2038, 75241, 12, 576, 1172, 5420, 7525, 525, 27572, 279, 1102, 315, 264, 729, 1618, 304, 264, 3890, 11, 1380, 678, 5977, 1969, 2987, 387, 279, 1946, 5827, 11, 1045, 18021, 1741, 438, 25780, 476, 4185, 22879, 18860, 17961, 11, 476, 264, 3890, 8597, 24182, 2878, 279, 1852, 28961, 11, 323, 1817, 729, 429, 374, 1660, 2598, 1969, 2987, 387, 264, 45558, 729, 476, 264, 3890, 8597, 20346, 2878, 279, 1852, 28961, 13, 715, 12, 1096, 1083, 3363, 429, 1817, 1555, 315, 2038, 374, 44321, 311, 387, 264, 3175, 729, 1618, 624, 4416, 11, 498, 525, 2661, 264, 3383, 323, 264, 738, 315, 10295, 11, 498, 1184, 311, 6923, 264, 2038, 429, 646, 11625, 279, 3383, 624, 151645, 198, 151644, 872, 198, 334, 13383, 220, 16, 3070, 715, 1946, 25, 320, 24, 553, 220, 23, 8, 11631, 715, 19, 91, 17, 91, 17, 91, 19, 91, 15, 91, 17, 91, 19, 91, 17, 198, 17, 91, 19, 91, 19, 91, 15, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 19, 91, 17, 91, 19, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 19, 91, 17, 91, 19, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 19, 91, 17, 198, 19, 91, 19, 91, 19, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 15, 91, 17, 91, 17, 91, 19, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 19, 91, 17, 91, 19, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 19, 91, 17, 198, 2550, 25, 320, 23, 553, 220, 24, 8, 11631, 715, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 19, 91, 19, 91, 19, 91, 17, 91, 19, 91, 19, 91, 17, 91, 17, 91, 19, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 19, 91, 17, 91, 17, 91, 17, 91, 19, 91, 19, 91, 17, 91, 15, 198, 17, 91, 17, 91, 15, 91, 17, 91, 17, 91, 17, 91, 17, 91, 15, 91, 19, 198, 17, 91, 17, 91, 17, 91, 19, 91, 17, 91, 17, 91, 19, 91, 19, 91, 17, 198, 17, 91, 17, 91, 17, 91, 19, 91, 17, 91, 17, 91, 17, 91, 19, 91, 17, 198, 17, 91, 17, 91, 17, 91, 19, 91, 17, 91, 17, 91, 17, 91, 17, 91, 19, 271, 334, 13383, 220, 17, 3070, 715, 1946, 25, 320, 16, 17, 553, 220, 16, 22, 8, 11631, 715, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 15, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 23, 91, 17, 91, 17, 91, 15, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 23, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 198, 15, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 15, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 23, 198, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 
17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 2550, 25, 320, 16, 22, 553, 220, 16, 17, 8, 11631, 715, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 15, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 15, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 23, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 198, 23, 91, 23, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 15, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 23, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 91, 17, 198, 17, 91, 17, 91, 17, 91, 17, 91, 23, 91, 17, 91, 15, 91, 17, 91, 23, 91, 17, 91, 17, 91, 17, 271, 334, 13383, 220, 18, 3070, 715, 1946, 25, 320, 23, 553, 220, 21, 8, 11631, 715, 22, 91, 16, 91, 22, 91, 16, 91, 22, 91, 22, 198, 22, 91, 22, 91, 22, 91, 16, 91, 16, 91, 22, 198, 22, 91, 22, 91, 22, 91, 16, 91, 22, 91, 22, 198, 22, 91, 22, 91, 22, 91, 24, 91, 22, 91, 22, 198, 16, 91, 16, 91, 22, 91, 16, 91, 22, 91, 16, 198, 22, 91, 22, 91, 22, 91, 22, 91, 22, 91, 22, 198, 16, 91, 16, 91, 22, 91, 24, 91, 22, 91, 24, 198, 22, 91, 16, 91, 16, 91, 22, 91, 22, 91, 22, 198, 2550, 25, 320, 21, 553, 220, 23, 8, 11631, 715, 22, 91, 24, 91, 22, 91, 16, 91, 22, 91, 22, 91, 22, 91, 22, 198, 22, 91, 22, 91, 22, 91, 22, 91, 22, 91, 22, 91, 16, 91, 22, 198, 22, 91, 24, 91, 22, 91, 16, 91, 24, 91, 16, 91, 16, 91, 16, 198, 16, 91, 22, 91, 22, 91, 22, 91, 22, 91, 22, 91, 22, 91, 22, 198, 16, 91, 16, 91, 22, 91, 16, 91, 22, 91, 22, 91, 22, 91, 16, 198, 22, 91, 16, 91, 22, 91, 16, 91, 22, 91, 22, 91, 22, 91, 22, 271, 151645, 198, 151644, 77091, 198, 750, 28961, 8972, 25, 10587, 8, 1464, 10587, 510, 262, 856, 15, 284, 5749, 24, 15, 8972, 340, 262, 856, 16, 284, 305, 73225, 2075, 15, 340, 262, 470, 856, 16, 151645]
104
+ inputs:
105
+ <|im_start|>system
106
+ You are a helpful assistant that can solve reasoning tasks by using a limited set of DSL functions that are implemented in Python.
107
+ *** Task description ***
108
+ - Each task consists of around a handful of training examples, where an training example consists of an input grid and an output grid.
109
+ - For each training example, the output grid is the result of applying the same task-specific transformation to the input grid.
110
+ - The goal is to infer the transformation from the few training examples.
111
+ - The transformation is a task-specific grid transformation, which can be decomposed into a sequence of the DSL functions.
112
+ *** DSL description ***
113
+ - Types and Constants
114
+ - **Types**: Define various data types like `Grid`, `Object`, `Indices`, and more to facilitate grid operations.
115
+ - **Constants**: Include color constants (e.g., `ZERO`, `ONE`), boolean constants (`T`, `F`), and directional vectors (e.g., `UP`, `DOWN`).
116
+ - Primitives
117
+ - **Math Operations**: Functions like `add`, `subtract`, `multiply`, and `divide` perform basic arithmetic on integers or tuples.
118
+ - **Logical Operations**: Functions such as `even`, `flip`, and `both` handle logical evaluations.
119
+ - **Data Operations**: Functions like `identity`, `order`, `merge`, `difference`, and `dedupe` manage data containers.
120
+ - Grid and Object Manipulation
121
+ - **Grid Creation**: `canvas` creates grids with specified dimensions and values.
122
+ - **Grid Transformation**: Functions like `rot90`, `hmirror`, `upscale`, and `downscale` transform grids in various ways.
123
+ - **Subgrid Operations**: `crop`, `hsplit`, `vsplit`, and `trim` extract or modify parts of grids.
124
+ - **Object and Patch Handling**: Functions like `objects`, `normalize`, `shift`, `toindices`, and `recolor` handle grid patches and objects.
125
+ - Analysis and Filtering
126
+ - **Color Analysis**: Functions such as `mostcolor`, `leastcolor`, `colorcount`, and `palette` analyze color distributions.
127
+ - **Object Filtering**: `colorfilter` and `sizefilter` filter objects by color or size.
128
+ - **Spatial Analysis**: Functions like `center`, `position`, `manhattan`, and `adjacent` analyze spatial relationships.
129
+ - Connectivity and Bounding
130
+ - **Connectivity**: `connect`, `neighbors`, `dneighbors`, and `ineighbors` determine connections between grid indices.
131
+ - **Bounding**: Functions like `box`, `inbox`, `outbox`, and `corners` manage bounding areas of patches.
132
+ - Utils
133
+ - **Random Integer Generation**: `unifint` generates random integers within specified bounds and difficulty levels.
134
+ - **Grid Validation**: `is_grid` checks if an input is a valid grid.
135
+ - **Grid Formatting**: `format_grid` casts lists to the grid type.
136
+ *** Format of the generated code ***
137
+ - The only allowed operations are storing the result of a function call in a variable, where all arguments must either be the input grid, some constants such as integers or common vectors indicating directions, or a variable previously computed within the same solver, and each function that is being called must either be a DSL function or a variable previously constructed within the same solver.
138
+ - This also means that each line of code is enforced to be a single function call.
139
+ So, you are given a task and a set of examples, you need to generate a code that can solve the task.
140
+ <|im_end|>
141
+ <|im_start|>user
142
+ ** Example 1 **
143
+ input: (9 by 8) Matrix
144
+ 4|2|2|4|0|2|4|2
145
+ 2|4|4|0|2|2|2|2
146
+ 2|2|4|2|4|2|2|2
147
+ 2|2|2|2|4|2|4|2
148
+ 2|2|2|2|2|2|4|2
149
+ 4|4|4|2|2|2|2|2
150
+ 2|2|2|0|2|2|4|2
151
+ 2|2|2|2|4|2|4|2
152
+ 2|2|2|2|2|2|4|2
153
+ output: (8 by 9) Matrix
154
+ 2|2|2|2|2|2|2|2|2
155
+ 4|4|4|2|4|4|2|2|4
156
+ 2|2|2|2|2|2|2|2|2
157
+ 2|4|2|2|2|4|4|2|0
158
+ 2|2|0|2|2|2|2|0|4
159
+ 2|2|2|4|2|2|4|4|2
160
+ 2|2|2|4|2|2|2|4|2
161
+ 2|2|2|4|2|2|2|2|4
162
+
163
+ ** Example 2 **
164
+ input: (12 by 17) Matrix
165
+ 2|2|2|2|2|2|2|2|2|2|2|2|2|2|2|2|8
166
+ 2|2|2|2|2|0|2|2|2|2|8|2|8|2|2|0|2
167
+ 2|2|2|2|2|2|2|2|2|2|2|2|2|2|2|2|2
168
+ 8|2|2|2|2|2|2|2|2|2|2|2|2|2|2|2|2
169
+ 2|8|2|8|2|2|2|2|2|2|2|2|8|2|2|2|2
170
+ 0|2|2|2|2|8|2|2|2|2|2|2|2|2|0|2|2
171
+ 2|2|2|2|2|2|2|2|2|2|2|2|2|2|8|2|8
172
+ 8|2|2|2|2|8|2|2|2|2|2|2|2|8|2|2|2
173
+ 2|2|8|2|2|2|2|2|8|2|2|2|2|2|2|2|2
174
+ 2|2|8|2|2|2|2|2|2|8|2|2|2|2|2|2|2
175
+ 2|2|2|2|2|2|2|2|2|8|2|2|2|2|2|2|2
176
+ 2|2|2|2|2|2|2|8|2|8|2|2|2|2|2|2|2
177
+ output: (17 by 12) Matrix
178
+ 2|2|2|2|2|8|2|2|2|2|2|8
179
+ 2|2|2|2|2|2|2|2|2|2|0|2
180
+ 2|2|2|2|2|8|0|2|2|2|2|2
181
+ 2|2|2|2|8|2|2|2|2|2|2|2
182
+ 2|2|2|2|2|2|2|8|2|2|8|2
183
+ 2|2|2|2|2|2|2|2|2|2|2|2
184
+ 2|2|2|2|2|2|2|2|2|2|8|2
185
+ 8|8|8|2|2|2|2|2|2|2|2|2
186
+ 2|2|2|8|2|2|2|2|2|2|2|2
187
+ 8|2|2|2|2|2|2|2|2|2|2|2
188
+ 2|2|2|2|2|2|2|2|2|2|2|2
189
+ 2|2|2|2|8|2|8|2|2|2|0|2
190
+ 2|2|2|2|2|2|2|2|2|2|2|2
191
+ 2|2|2|2|2|2|2|8|2|2|2|2
192
+ 2|2|8|8|2|2|2|2|2|2|2|2
193
+ 2|2|2|2|2|2|2|8|2|2|2|2
194
+ 2|2|2|2|8|2|0|2|8|2|2|2
195
+
196
+ ** Example 3 **
197
+ input: (8 by 6) Matrix
198
+ 7|1|7|1|7|7
199
+ 7|7|7|1|1|7
200
+ 7|7|7|1|7|7
201
+ 7|7|7|9|7|7
202
+ 1|1|7|1|7|1
203
+ 7|7|7|7|7|7
204
+ 1|1|7|9|7|9
205
+ 7|1|1|7|7|7
206
+ output: (6 by 8) Matrix
207
+ 7|9|7|1|7|7|7|7
208
+ 7|7|7|7|7|7|1|7
209
+ 7|9|7|1|9|1|1|1
210
+ 1|7|7|7|7|7|7|7
211
+ 1|1|7|1|7|7|7|1
212
+ 7|1|7|1|7|7|7|7
213
+
214
+ <|im_end|>
215
+ <|im_start|>assistant
216
+ def solver(I: Grid) -> Grid:
217
+ x0 = rot90(I)
218
+ x1 = hmirror(x0)
219
+ return x1<|im_end|>
220
+ label_ids:
221
+ [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 750, 28961, 8972, 25, 10587, 8, 1464, 10587, 510, 262, 856, 15, 284, 5749, 24, 15, 8972, 340, 262, 856, 16, 284, 305, 73225, 2075, 15, 340, 262, 470, 856, 16, 151645]
222
+ labels:
223
+ def solver(I: Grid) -> Grid:
224
+ x0 = rot90(I)
225
+ x1 = hmirror(x0)
226
+ return x1<|im_end|>
227
+ [INFO|configuration_utils.py:673] 2024-10-20 11:23:35,214 >> loading configuration file /mnt/data/zifeng.cao/reasoning/arc-agi/LLaMA-Factory/saves/Qwen2.5-Coder-7B-Instruct/pt_output_plus_step_output/checkpoint-274/config.json
228
+ [INFO|configuration_utils.py:742] 2024-10-20 11:23:35,215 >> Model config Qwen2Config {
229
+ "_name_or_path": "/mnt/data/zifeng.cao/reasoning/arc-agi/LLaMA-Factory/saves/Qwen2.5-Coder-7B-Instruct/pt_output_plus_step_output/checkpoint-274",
230
+ "architectures": [
231
+ "Qwen2ForCausalLM"
232
+ ],
233
+ "attention_dropout": 0.0,
234
+ "bos_token_id": 151643,
235
+ "eos_token_id": 151643,
236
+ "hidden_act": "silu",
237
+ "hidden_size": 3584,
238
+ "initializer_range": 0.02,
239
+ "intermediate_size": 18944,
240
+ "max_position_embeddings": 32768,
241
+ "max_window_layers": 28,
242
+ "model_type": "qwen2",
243
+ "num_attention_heads": 28,
244
+ "num_hidden_layers": 28,
245
+ "num_key_value_heads": 4,
246
+ "rms_norm_eps": 1e-06,
247
+ "rope_scaling": null,
248
+ "rope_theta": 1000000.0,
249
+ "sliding_window": null,
250
+ "tie_word_embeddings": false,
251
+ "torch_dtype": "bfloat16",
252
+ "transformers_version": "4.45.2",
253
+ "use_cache": false,
254
+ "use_sliding_window": false,
255
+ "vocab_size": 152064
256
+ }
257
+
258
+ [INFO|modeling_utils.py:3729] 2024-10-20 11:23:41,683 >> loading weights file /mnt/data/zifeng.cao/reasoning/arc-agi/LLaMA-Factory/saves/Qwen2.5-Coder-7B-Instruct/pt_output_plus_step_output/checkpoint-274/model.safetensors.index.json
259
+ [INFO|modeling_utils.py:3874] 2024-10-20 11:23:41,685 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model
260
+ [2024-10-20 11:23:41,685] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 2
261
+ [2024-10-20 11:23:41,685] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 2
262
+ [INFO|configuration_utils.py:1099] 2024-10-20 11:23:41,692 >> Generate config GenerationConfig {
263
+ "bos_token_id": 151643,
264
+ "eos_token_id": 151643,
265
+ "use_cache": false
266
+ }
267
+
268
+ [2024-10-20 11:23:41,939] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 339, num_elems = 7.62B
269
+
270
+ 10/20/2024 11:23:52 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
271
+ 10/20/2024 11:23:52 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
272
+ 10/20/2024 11:23:52 - INFO - llamafactory.model.adapter - ZeRO3 / FSDP detected, remaining trainable params in float32.
273
+ 10/20/2024 11:23:52 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
274
+ 10/20/2024 11:23:52 - INFO - llamafactory.model.loader - trainable params: 7,615,616,512 || all params: 7,615,616,512 || trainable%: 100.0000
275
+
276
+ [INFO|modeling_utils.py:4574] 2024-10-20 11:23:52,977 >> All model checkpoint weights were used when initializing Qwen2ForCausalLM.
277
+
278
+ [INFO|modeling_utils.py:4582] 2024-10-20 11:23:52,977 >> All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at /mnt/data/zifeng.cao/reasoning/arc-agi/LLaMA-Factory/saves/Qwen2.5-Coder-7B-Instruct/pt_output_plus_step_output/checkpoint-274.
279
+ If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2ForCausalLM for predictions without further training.
280
+ [INFO|configuration_utils.py:1052] 2024-10-20 11:23:53,013 >> loading configuration file /mnt/data/zifeng.cao/reasoning/arc-agi/LLaMA-Factory/saves/Qwen2.5-Coder-7B-Instruct/pt_output_plus_step_output/checkpoint-274/generation_config.json
281
+ [INFO|configuration_utils.py:1099] 2024-10-20 11:23:53,013 >> Generate config GenerationConfig {
282
+ "bos_token_id": 151643,
283
+ "do_sample": true,
284
+ "eos_token_id": [
285
+ 151645,
286
+ 151643
287
+ ],
288
+ "pad_token_id": 151643,
289
+ "repetition_penalty": 1.1,
290
+ "temperature": 0.7,
291
+ "top_k": 20,
292
+ "top_p": 0.8
293
+ }
294
+
295
+ 10/20/2024 11:23:53 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
296
+ 10/20/2024 11:23:53 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
297
+ 10/20/2024 11:23:53 - INFO - llamafactory.model.adapter - ZeRO3 / FSDP detected, remaining trainable params in float32.
298
+ 10/20/2024 11:23:53 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
299
+ 10/20/2024 11:23:53 - INFO - llamafactory.model.loader - trainable params: 7,615,616,512 || all params: 7,615,616,512 || trainable%: 100.0000
300
+ Detected kernel version 4.19.91, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
301
+ [INFO|trainer.py:667] 2024-10-20 11:23:53,139 >> Using auto half precision backend
302
+ [2024-10-20 11:23:53,339] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.15.2, git-hash=unknown, git-branch=unknown
303
+ [2024-10-20 11:23:53,340] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 2
304
+ [2024-10-20 11:23:53,349] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
305
+ [2024-10-20 11:23:53,350] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
306
+ [2024-10-20 11:23:53,350] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
307
+ [2024-10-20 11:23:53,363] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = AdamW
308
+ [2024-10-20 11:23:53,363] [INFO] [utils.py:59:is_zero_supported_optimizer] Checking ZeRO support for optimizer=AdamW type=<class 'torch.optim.adamw.AdamW'>
309
+ [2024-10-20 11:23:53,363] [INFO] [logging.py:96:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer, MiCS is enabled False, Hierarchical params gather False
310
+ [2024-10-20 11:23:53,363] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 3 optimizer
311
+ [2024-10-20 11:23:53,537] [INFO] [utils.py:781:see_memory_usage] Stage 3 initialize beginning
312
+ [2024-10-20 11:23:53,538] [INFO] [utils.py:782:see_memory_usage] MA 7.11 GB Max_MA 9.65 GB CA 8.89 GB Max_CA 10 GB
313
+ [2024-10-20 11:23:53,538] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
314
+ [2024-10-20 11:23:53,540] [INFO] [stage3.py:165:__init__] Reduce bucket size 12845056
315
+ [2024-10-20 11:23:53,540] [INFO] [stage3.py:166:__init__] Prefetch bucket size 11560550
316
+ [2024-10-20 11:23:53,716] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
317
+ [2024-10-20 11:23:53,717] [INFO] [utils.py:782:see_memory_usage] MA 7.11 GB Max_MA 7.11 GB CA 8.89 GB Max_CA 9 GB
318
+ [2024-10-20 11:23:53,717] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
319
+ Parameter Offload: Total persistent parameters: 333312 in 141 params
320
+ [2024-10-20 11:23:53,913] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
321
+ [2024-10-20 11:23:53,914] [INFO] [utils.py:782:see_memory_usage] MA 7.11 GB Max_MA 7.11 GB CA 8.89 GB Max_CA 9 GB
322
+ [2024-10-20 11:23:53,914] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
323
+ [2024-10-20 11:23:54,093] [INFO] [utils.py:781:see_memory_usage] Before creating fp16 partitions
324
+ [2024-10-20 11:23:54,094] [INFO] [utils.py:782:see_memory_usage] MA 7.11 GB Max_MA 7.11 GB CA 8.89 GB Max_CA 9 GB
325
+ [2024-10-20 11:23:54,094] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
326
+ [2024-10-20 11:23:57,802] [INFO] [utils.py:781:see_memory_usage] After creating fp16 partitions: 5
327
+ [2024-10-20 11:23:57,803] [INFO] [utils.py:782:see_memory_usage] MA 7.09 GB Max_MA 7.11 GB CA 7.1 GB Max_CA 9 GB
328
+ [2024-10-20 11:23:57,803] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
329
+ [2024-10-20 11:23:57,979] [INFO] [utils.py:781:see_memory_usage] Before creating fp32 partitions
330
+ [2024-10-20 11:23:57,980] [INFO] [utils.py:782:see_memory_usage] MA 7.09 GB Max_MA 7.09 GB CA 7.1 GB Max_CA 7 GB
331
+ [2024-10-20 11:23:57,980] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
332
+ [2024-10-20 11:23:58,175] [INFO] [utils.py:781:see_memory_usage] After creating fp32 partitions
333
+ [2024-10-20 11:23:58,176] [INFO] [utils.py:782:see_memory_usage] MA 21.28 GB Max_MA 22.72 GB CA 23.19 GB Max_CA 23 GB
334
+ [2024-10-20 11:23:58,176] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
335
+ [2024-10-20 11:23:58,350] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states
336
+ [2024-10-20 11:23:58,351] [INFO] [utils.py:782:see_memory_usage] MA 21.28 GB Max_MA 21.28 GB CA 23.19 GB Max_CA 23 GB
337
+ [2024-10-20 11:23:58,351] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
338
+ [2024-10-20 11:23:58,527] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states
339
+ [2024-10-20 11:23:58,527] [INFO] [utils.py:782:see_memory_usage] MA 21.28 GB Max_MA 25.08 GB CA 26.99 GB Max_CA 27 GB
340
+ [2024-10-20 11:23:58,528] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
341
+ [2024-10-20 11:23:58,528] [INFO] [stage3.py:520:_setup_for_real_optimizer] optimizer state initialized
342
+ [2024-10-20 11:23:58,815] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer
343
+ [2024-10-20 11:23:58,816] [INFO] [utils.py:782:see_memory_usage] MA 28.4 GB Max_MA 30.43 GB CA 34.08 GB Max_CA 34 GB
344
+ [2024-10-20 11:23:58,816] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 3.75 GB, percent = 0.9%
345
+ [2024-10-20 11:23:58,816] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer_Stage3
346
+ [2024-10-20 11:23:58,816] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = None
347
+ [2024-10-20 11:23:58,816] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = None
348
+ [2024-10-20 11:23:58,817] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999)]
349
+ [2024-10-20 11:23:58,818] [INFO] [config.py:999:print] DeepSpeedEngine configuration:
+ [2024-10-20 11:23:58,818] [INFO] [config.py:1003:print]   activation_checkpointing_config  {
+     "partition_activations": false,
+     "contiguous_memory_optimization": false,
+     "cpu_checkpointing": false,
+     "number_checkpoints": null,
+     "synchronize_checkpoint_boundary": false,
+     "profile": false
+ }
+ [2024-10-20 11:23:58,818] [INFO] [config.py:1003:print]   aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
+ [2024-10-20 11:23:58,818] [INFO] [config.py:1003:print]   amp_enabled .................. False
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   amp_params ................... False
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   autotuning_config ............ {
+     "enabled": false,
+     "start_step": null,
+     "end_step": null,
+     "metric_path": null,
+     "arg_mappings": null,
+     "metric": "throughput",
+     "model_info": null,
+     "results_dir": "autotuning_results",
+     "exps_dir": "autotuning_exps",
+     "overwrite": true,
+     "fast": true,
+     "start_profile_step": 3,
+     "end_profile_step": 5,
+     "tuner_type": "gridsearch",
+     "tuner_early_stopping": 5,
+     "tuner_num_trials": 50,
+     "model_info_path": null,
+     "mp_size": 1,
+     "max_train_batch_size": null,
+     "min_train_batch_size": 1,
+     "max_train_micro_batch_size_per_gpu": 1.024000e+03,
+     "min_train_micro_batch_size_per_gpu": 1,
+     "num_tuning_micro_batch_sizes": 3
+ }
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   bfloat16_enabled ............. True
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   bfloat16_immediate_grad_update  False
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   checkpoint_parallel_write_pipeline  False
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   checkpoint_tag_validation_enabled  True
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   checkpoint_tag_validation_fail  False
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f9ec0617850>
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   communication_data_type ...... None
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   curriculum_enabled_legacy .... False
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   curriculum_params_legacy ..... False
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
+ [2024-10-20 11:23:58,819] [INFO] [config.py:1003:print]   data_efficiency_enabled ...... False
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   dataloader_drop_last ......... False
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   disable_allgather ............ False
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   dump_state ................... False
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   dynamic_loss_scale_args ...... None
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   eigenvalue_enabled ........... False
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   eigenvalue_gas_boundary_resolution  1
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   eigenvalue_layer_name ........ bert.encoder.layer
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   eigenvalue_layer_num ......... 0
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   eigenvalue_max_iter .......... 100
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   eigenvalue_stability ......... 1e-06
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   eigenvalue_tol ............... 0.01
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   eigenvalue_verbose ........... False
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   elasticity_enabled ........... False
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   flops_profiler_config ........ {
+     "enabled": false,
+     "recompute_fwd_factor": 0.0,
+     "profile_step": 1,
+     "module_depth": -1,
+     "top_modules": 1,
+     "detailed": true,
+     "output_file": null
+ }
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   fp16_auto_cast ............... None
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   fp16_enabled ................. False
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   fp16_master_weights_and_gradients  False
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   global_rank .................. 0
+ [2024-10-20 11:23:58,820] [INFO] [config.py:1003:print]   grad_accum_dtype ............. None
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   gradient_accumulation_steps .. 4
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   gradient_clipping ............ 1.0
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   gradient_predivide_factor .... 1.0
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   graph_harvesting ............. False
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   initial_dynamic_scale ........ 1
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   load_universal_checkpoint .... False
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   loss_scale ................... 1.0
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   memory_breakdown ............. False
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   mics_hierarchial_params_gather  False
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   mics_shard_size .............. -1
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   nebula_config ................ {
+     "enabled": false,
+     "persistent_storage_path": null,
+     "persistent_time_interval": 100,
+     "num_of_version_in_retention": 2,
+     "enable_nebula_load": true,
+     "load_path": null
+ }
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   optimizer_legacy_fusion ...... False
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   optimizer_name ............... None
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   optimizer_params ............. None
+ [2024-10-20 11:23:58,821] [INFO] [config.py:1003:print]   pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   pld_enabled .................. False
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   pld_params ................... False
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   prescale_gradients ........... False
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   scheduler_name ............... None
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   scheduler_params ............. None
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   seq_parallel_communication_data_type  torch.float32
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   sparse_attention ............. None
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   sparse_gradients_enabled ..... False
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   steps_per_print .............. inf
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   timers_config ................ enabled=True synchronized=True
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   train_batch_size ............. 8
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   train_micro_batch_size_per_gpu  1
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   use_data_before_expert_parallel_  False
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   use_node_local_storage ....... False
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   wall_clock_breakdown ......... False
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   weight_quantization_config ... None
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   world_size ................... 2
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   zero_allow_untested_optimizer  True
+ [2024-10-20 11:23:58,822] [INFO] [config.py:1003:print]   zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=12845056 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=11560550 param_persistence_threshold=35840 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=True use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
+ [2024-10-20 11:23:58,823] [INFO] [config.py:1003:print]   zero_enabled ................. True
+ [2024-10-20 11:23:58,823] [INFO] [config.py:1003:print]   zero_force_ds_cpu_optimizer .. True
+ [2024-10-20 11:23:58,823] [INFO] [config.py:1003:print]   zero_optimization_stage ...... 3
+ [2024-10-20 11:23:58,823] [INFO] [config.py:989:print_user_config]   json = {
+     "train_batch_size": 8,
+     "train_micro_batch_size_per_gpu": 1,
+     "gradient_accumulation_steps": 4,
+     "gradient_clipping": 1.0,
+     "zero_allow_untested_optimizer": true,
+     "fp16": {
+         "enabled": false,
+         "loss_scale": 0,
+         "loss_scale_window": 1000,
+         "initial_scale_power": 16,
+         "hysteresis": 2,
+         "min_loss_scale": 1
+     },
+     "bf16": {
+         "enabled": true
+     },
+     "zero_optimization": {
+         "stage": 3,
+         "overlap_comm": true,
+         "contiguous_gradients": true,
+         "sub_group_size": 1.000000e+09,
+         "reduce_bucket_size": 1.284506e+07,
+         "stage3_prefetch_bucket_size": 1.156055e+07,
+         "stage3_param_persistence_threshold": 3.584000e+04,
+         "stage3_max_live_parameters": 1.000000e+09,
+         "stage3_max_reuse_distance": 1.000000e+09,
+         "stage3_gather_16bit_weights_on_model_save": true
+     },
+     "steps_per_print": inf
+ }
+ [INFO|trainer.py:2243] 2024-10-20 11:23:58,823 >> ***** Running training *****
+ [INFO|trainer.py:2244] 2024-10-20 11:23:58,823 >> Num examples = 14,094
+ [INFO|trainer.py:2245] 2024-10-20 11:23:58,823 >> Num Epochs = 4
+ [INFO|trainer.py:2246] 2024-10-20 11:23:58,823 >> Instantaneous batch size per device = 1
+ [INFO|trainer.py:2249] 2024-10-20 11:23:58,823 >> Total train batch size (w. parallel, distributed & accumulation) = 8
+ [INFO|trainer.py:2250] 2024-10-20 11:23:58,823 >> Gradient Accumulation steps = 4
+ [INFO|trainer.py:2251] 2024-10-20 11:23:58,823 >> Total optimization steps = 7,044
+ [INFO|trainer.py:2252] 2024-10-20 11:23:58,824 >> Number of trainable parameters = 7,615,616,512
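Note: the trainer header is consistent with the DeepSpeed config printed above. The effective batch size is micro_batch_size_per_gpu x gradient_accumulation_steps x world_size = 1 x 4 x 2 = 8, and with 14,094 examples per epoch that gives 14,094 / 8 = 1,761 optimizer steps per epoch (rounded down), so 4 epochs yield 1,761 x 4 = 7,044 total optimization steps, matching the log.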
+
  0%| | 0/7044 [00:00<?, ?it/s][2024-10-20 11:24:15,553] [WARNING] [stage3.py:2104:step] 1 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time
+
  0%| | 1/7044 [00:16<32:21:12, 16.54s/it][rank0]: Traceback (most recent call last):
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/src/llamafactory/launcher.py", line 23, in <module>
+ [rank0]:     launch()
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/src/llamafactory/launcher.py", line 19, in launch
+ [rank0]:     run_exp()
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/src/llamafactory/train/tuner.py", line 50, in run_exp
+ [rank0]:     run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 96, in run_sft
+ [rank0]:     train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/transformers/trainer.py", line 2052, in train
+ [rank0]:     return inner_training_loop(
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/transformers/trainer.py", line 2388, in _inner_training_loop
+ [rank0]:     tr_loss_step = self.training_step(model, inputs)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/transformers/trainer.py", line 3518, in training_step
+ [rank0]:     self.accelerator.backward(loss, **kwargs)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/accelerate/accelerator.py", line 2188, in backward
+ [rank0]:     self.deepspeed_engine_wrapped.backward(loss, **kwargs)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 166, in backward
+ [rank0]:     self.engine.backward(loss, **kwargs)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn
+ [rank0]:     ret_val = func(*args, **kwargs)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2020, in backward
+ [rank0]:     self.optimizer.backward(loss, retain_graph=retain_graph)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn
+ [rank0]:     ret_val = func(*args, **kwargs)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py", line 2249, in backward
+ [rank0]:     self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
+ [rank0]:     scaled_loss.backward(retain_graph=retain_graph)
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/torch/_tensor.py", line 581, in backward
+ [rank0]:     torch.autograd.backward(
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
+ [rank0]:     _engine_run_backward(
+ [rank0]:   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
+ [rank0]:     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
+ [rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.34 GiB. GPU 0 has a total capacity of 79.35 GiB of which 2.95 GiB is free. Process 22118 has 76.39 GiB memory in use. Of the allocated memory 67.73 GiB is allocated by PyTorch, and 7.82 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+
  0%| | 1/7044 [00:20<40:23:26, 20.65s/it]
+ W1020 11:24:20.981000 5294 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 5344 closing signal SIGTERM
+ E1020 11:24:21.272000 5294 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 5343) of binary: /mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/bin/python
+ Traceback (most recent call last):
+   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/bin/torchrun", line 8, in <module>
+     sys.exit(main())
+   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
+     return f(*args, **kwargs)
+   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/torch/distributed/run.py", line 919, in main
+     run(args)
+   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/torch/distributed/run.py", line 910, in run
+     elastic_launch(
+   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
+     return launch_agent(self._config, self._entrypoint, list(args))
+   File "/mnt/data/guibin.chen/open-o1/LLaMA-Factory/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
+     raise ChildFailedError(
+ torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
+ ============================================================
+ /mnt/data/guibin.chen/open-o1/LLaMA-Factory/src/llamafactory/launcher.py FAILED
+ ------------------------------------------------------------
+ Failures:
+   <NO_OTHER_FAILURES>
+ ------------------------------------------------------------
+ Root Cause (first observed failure):
+ [0]:
+   time      : 2024-10-20_11:24:20
+   host      : dsw-116518-6475cdf9d-vpqks
+   rank      : 0 (local_rank: 0)
+   exitcode  : 1 (pid: 5343)
+   error_file: <N/A>
+   traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
+ ============================================================
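The run dies on the first backward pass: full-parameter SFT of a ~7.6B-parameter model under ZeRO-3 across 2 GPUs, with no CPU offload configured (the engine config above shows offload_param=None and offload_optimizer=None), exhausts the 79.35 GiB card. A minimal sketch of two common mitigations; the allocator hint is quoted from the error message itself, while the offload keys are standard DeepSpeed ZeRO-3 options, not settings taken from this run:

    # 1) Allocator hint from the OOM message (reduces fragmentation only):
    export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

    # 2) Hypothetical ds_config change: move optimizer state (and, if still
    #    needed, parameters) to CPU inside the "zero_optimization" block:
    #      "offload_optimizer": { "device": "cpu", "pin_memory": true },
    #      "offload_param":     { "device": "cpu", "pin_memory": true }

Offloading trades GPU memory for host-device transfer time; shortening the sequence length or switching to parameter-efficient tuning (e.g. LoRA) are alternatives when throughput matters.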
requirements.txt ADDED
@@ -0,0 +1,22 @@
+ transformers>=4.41.2,<=4.45.2
+ datasets>=2.16.0,<=2.21.0
+ accelerate>=0.30.1,<=0.34.2
+ peft>=0.11.1,<=0.12.0
+ trl>=0.8.6,<=0.9.6
+ gradio>=4.0.0,<5.0.0
+ pandas>=2.0.0
+ scipy
+ einops
+ sentencepiece
+ tiktoken
+ protobuf
+ uvicorn
+ pydantic
+ fastapi
+ sse-starlette
+ matplotlib>=3.7.0
+ fire
+ packaging
+ pyyaml
+ numpy<2.0.0
+ av
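These are the base runtime dependencies, with the core libraries pinned to tested version ranges. A minimal install sketch in a fresh environment (the venv path is illustrative):

    python -m venv venv && source venv/bin/activate
    pip install -r requirements.txt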
setup.py ADDED
@@ -0,0 +1,104 @@
+ # Copyright 2024 the LlamaFactory team.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ import os
+ import re
+ from typing import List
+
+ from setuptools import find_packages, setup
+
+
+ def get_version() -> str:
+     with open(os.path.join("src", "llamafactory", "extras", "env.py"), "r", encoding="utf-8") as f:
+         file_content = f.read()
+         pattern = r"{}\W*=\W*\"([^\"]+)\"".format("VERSION")
+         (version,) = re.findall(pattern, file_content)
+         return version
+
+
+ def get_requires() -> List[str]:
+     with open("requirements.txt", "r", encoding="utf-8") as f:
+         file_content = f.read()
+         lines = [line.strip() for line in file_content.strip().split("\n") if not line.startswith("#")]
+         return lines
+
+
+ def get_console_scripts() -> List[str]:
+     console_scripts = ["llamafactory-cli = llamafactory.cli:main"]
+     if os.environ.get("ENABLE_SHORT_CONSOLE", "1").lower() in ["true", "1"]:
+         console_scripts.append("lmf = llamafactory.cli:main")
+
+     return console_scripts
+
+
+ extra_require = {
+     "torch": ["torch>=1.13.1"],
+     "torch-npu": ["torch==2.1.0", "torch-npu==2.1.0.post3", "decorator"],
+     "metrics": ["nltk", "jieba", "rouge-chinese"],
+     "deepspeed": ["deepspeed>=0.10.0,<=0.14.4"],
+     "liger-kernel": ["liger-kernel"],
+     "bitsandbytes": ["bitsandbytes>=0.39.0"],
+     "hqq": ["hqq"],
+     "eetq": ["eetq"],
+     "gptq": ["optimum>=1.17.0", "auto-gptq>=0.5.0"],
+     "awq": ["autoawq"],
+     "aqlm": ["aqlm[gpu]>=1.1.0"],
+     "vllm": ["vllm>=0.4.3,<=0.6.3"],
+     "galore": ["galore-torch"],
+     "badam": ["badam>=1.2.1"],
+     "adam-mini": ["adam-mini"],
+     "qwen": ["transformers_stream_generator"],
+     "modelscope": ["modelscope"],
+     "openmind": ["openmind"],
+     "dev": ["ruff", "pytest"],
+ }
+
+
+ def main():
+     setup(
+         name="llamafactory",
+         version=get_version(),
+         author="hiyouga",
+         author_email="hiyouga" "@" "buaa.edu.cn",
+         description="Easy-to-use LLM fine-tuning framework",
+         long_description=open("README.md", "r", encoding="utf-8").read(),
+         long_description_content_type="text/markdown",
+         keywords=["LLaMA", "BLOOM", "Falcon", "LLM", "ChatGPT", "transformer", "pytorch", "deep learning"],
+         license="Apache 2.0 License",
+         url="https://github.com/hiyouga/LLaMA-Factory",
+         package_dir={"": "src"},
+         packages=find_packages("src"),
+         python_requires=">=3.8.0",
+         install_requires=get_requires(),
+         extras_require=extra_require,
+         entry_points={"console_scripts": get_console_scripts()},
+         classifiers=[
+             "Development Status :: 4 - Beta",
+             "Intended Audience :: Developers",
+             "Intended Audience :: Education",
+             "Intended Audience :: Science/Research",
+             "License :: OSI Approved :: Apache Software License",
+             "Operating System :: OS Independent",
+             "Programming Language :: Python :: 3",
+             "Programming Language :: Python :: 3.8",
+             "Programming Language :: Python :: 3.9",
+             "Programming Language :: Python :: 3.10",
+             "Programming Language :: Python :: 3.11",
+             "Topic :: Scientific/Engineering :: Artificial Intelligence",
+         ],
+     )
+
+
+ if __name__ == "__main__":
+     main()
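The script reads VERSION out of src/llamafactory/extras/env.py, reuses requirements.txt for install_requires, and registers the llamafactory-cli console script (plus the short alias lmf unless ENABLE_SHORT_CONSOLE is set to something other than "true"/"1"). A minimal usage sketch from the repository root; the extras names come from the extra_require table above:

    pip install -e ".[torch,metrics]"   # editable install with optional extras
    llamafactory-cli version            # assumes the CLI exposes a `version` subcommand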