Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data
Abstract
Chat models, such as ChatGPT, have shown impressive capabilities and have been rapidly adopted across numerous domains. However, these models are only accessible through a restricted API, creating barriers for new research and progress in the field. We propose a pipeline that can automatically generate a high-quality multi-turn chat corpus by leveraging ChatGPT to engage in a conversation with itself. Subsequently, we employ parameter-efficient tuning to enhance LLaMA, an open-source large language model. The resulting model, named Baize, demonstrates good performance in multi-turn dialogues with guardrails that minimize potential risks. The Baize models and data are released for research purposes only at https://github.com/project-baize/baize. An online demo is also available at https://huggingface.co/spaces/project-baize/baize-lora-7B.