CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues
Abstract
The CantTalkAboutThis dataset improves language models' ability to stay on topic during conversations by including distractor turns, making them more resilient and coherent compared to general-purpose instruction-tuned models.
Recent advancements in instruction-tuning datasets have predominantly focused on specific tasks like mathematical or logical reasoning. There has been a notable gap in data designed for aligning language models to maintain topic relevance in conversations, a critical aspect of deploying chatbots in production. We introduce the CantTalkAboutThis dataset to help language models remain focused on the subject at hand during task-oriented interactions. It consists of synthetic dialogues on a wide range of conversation topics from different domains. These dialogues are interspersed with distractor turns that intentionally divert the chatbot from the predefined topic. Fine-tuning language models on this dataset makes them resilient to deviations from their assigned role and improves their ability to maintain topical coherence compared to general-purpose instruction-tuned LLMs like GPT-4-turbo and Mixtral-Instruct. Additionally, preliminary observations suggest that training models on this dataset also enhances their performance on fine-grained instruction-following tasks.
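The abstract does not specify the dataset's actual schema. As a purely illustrative sketch, a task-oriented dialogue interspersed with a distractor turn might be represented as follows; all field names (`topic`, `turns`, `distractor`) are assumptions for illustration, not the paper's real format:

```python
# Hypothetical representation of one CantTalkAboutThis-style dialogue.
# Field names are illustrative assumptions, not the dataset's actual schema.
dialogue = {
    "topic": "booking a flight",
    "turns": [
        {"role": "user",
         "text": "I need a flight to Denver next Friday.",
         "distractor": False},
        {"role": "assistant",
         "text": "Sure, one-way or round trip?",
         "distractor": False},
        # A distractor turn deliberately pulls the chatbot off the assigned topic.
        {"role": "user",
         "text": "By the way, what do you think about the election?",
         "distractor": True},
        # The target behavior: decline and steer back to the task.
        {"role": "assistant",
         "text": "I can only help with flight bookings. One-way or round trip?",
         "distractor": False},
    ],
}

def distractor_turns(d):
    """Return the indices of turns flagged as off-topic distractors."""
    return [i for i, t in enumerate(d["turns"]) if t["distractor"]]
```

A fine-tuning pipeline could use such flags to supervise the assistant's on-topic responses immediately following each distractor turn.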