---
title: Advanced Dual Prompting
emoji: 🏆
colorFrom: green
colorTo: purple
sdk: gradio
sdk_version: 4.16.0
app_file: app.py
pinned: false
---

# Advanced Dual Prompting

This is a playground for testing out Stanford's 'Meta-Prompting' logic ([paper link](https://arxiv.org/abs/2401.12954)). For every user request, the request is first passed to a 'meta' bot, which generates the system prompt of a field-related 'expert' bot for answering the request. That is, on each round the LLM assigns the expert best suited to the user's specific request. The Stanford authors claim that this simple approach yields 60%+ better accuracy compared to a standard 'system_prompt + chat_history' setup. That makes it well worth checking out, so here is a simple implementation for everybody to play around with.

Something to keep in mind:

1. Currently it requires an API key from chatglm (get one here if you don't have one: [link](https://open.bigmodel.cn/usercenter/apikeys)).
2. To balance contextual understanding against token usage, the meta bot's logic is modified to have access to only the last round of chat and the current user request when generating an expert, for now.
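The two-stage flow described above can be sketched as follows. This is a minimal illustration, not the app's actual code: `call_llm` is a hypothetical stand-in for a real chat-completion call (e.g. to the chatglm API), and the meta system prompt wording is an assumption.

```python
# Sketch of the "meta -> expert" dual prompting round described above.
# `call_llm` is a hypothetical placeholder for any chat-completion backend.
from typing import Callable

# Assumed wording; the real meta prompt may differ.
META_SYSTEM_PROMPT = (
    "You are a meta assistant. Given the conversation below, write a "
    "system prompt for the single expert best suited to answer the "
    "user's latest request. Reply with the system prompt only."
)

def dual_prompt_round(
    user_request: str,
    last_round: list[dict],  # at most the previous (user, assistant) pair
    call_llm: Callable[[list[dict]], str],
) -> str:
    # Stage 1: the meta bot sees only the last round plus the new request,
    # and produces a system prompt describing a field-related expert.
    meta_messages = (
        [{"role": "system", "content": META_SYSTEM_PROMPT}]
        + last_round
        + [{"role": "user", "content": user_request}]
    )
    expert_system_prompt = call_llm(meta_messages)

    # Stage 2: a fresh "expert" bot answers under the generated prompt.
    expert_messages = [
        {"role": "system", "content": expert_system_prompt},
        {"role": "user", "content": user_request},
    ]
    return call_llm(expert_messages)
```

With a real backend, `call_llm` would wrap the provider's chat-completion endpoint; the key point is that the expert bot starts from a freshly generated system prompt each round.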