arxiv:2505.20046

REARANK: Reasoning Re-ranking Agent via Reinforcement Learning

Published on May 26 · Submitted by le723z on May 27
Abstract

AI-generated summary: REARANK, a reinforcement learning-enhanced large language model for listwise reasoning, outperforms baseline models and even surpasses GPT-4 on reasoning-intensive benchmarks with minimal data.

We present REARANK, a large language model (LLM)-based listwise reasoning reranking agent. REARANK explicitly reasons before reranking, significantly improving both performance and interpretability. Leveraging reinforcement learning and data augmentation, REARANK achieves substantial improvements over baseline models across popular information retrieval benchmarks, notably requiring only 179 annotated samples. Built on top of Qwen2.5-7B, our REARANK-7B demonstrates performance comparable to GPT-4 on both in-domain and out-of-domain benchmarks and even surpasses GPT-4 on reasoning-intensive BRIGHT benchmarks. These results underscore the effectiveness of our approach and highlight how reinforcement learning can enhance LLM reasoning capabilities in reranking.
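The abstract describes a listwise reranking agent: the model sees a query plus a numbered list of candidate passages, reasons, and then emits a permutation of those passages. As a rough illustration of that interface (a minimal sketch with hypothetical prompt wording and ranking format, not the paper's actual implementation), a listwise reranker needs two pieces around the LLM call: a prompt builder that numbers the candidates, and a parser that recovers a ranking such as `[2] > [1] > [3]` from the model's reasoning-plus-answer output.

```python
import re


def build_listwise_prompt(query: str, passages: list[str]) -> str:
    """Assemble a listwise reranking prompt: the query plus numbered passages.

    The model is asked to reason first, then answer with a ranking string.
    (Hypothetical wording; the paper's exact prompt may differ.)
    """
    lines = [f"Query: {query}", "Rank the passages below by relevance to the query.", ""]
    for i, passage in enumerate(passages, start=1):
        lines.append(f"[{i}] {passage}")
    lines.append("")
    lines.append("Think step by step, then answer with a ranking like [2] > [1] > [3].")
    return "\n".join(lines)


def parse_ranking(response: str, num_passages: int) -> list[int]:
    """Extract 0-based passage indices from a '[i] > [j] > ...' style answer.

    Duplicate and out-of-range indices are ignored; any passages the model
    omitted are appended in their original order as a fallback.
    """
    order: list[int] = []
    for match in re.finditer(r"\[(\d+)\]", response):
        idx = int(match.group(1)) - 1
        if 0 <= idx < num_passages and idx not in order:
            order.append(idx)
    order.extend(i for i in range(num_passages) if i not in order)
    return order


if __name__ == "__main__":
    passages = ["cats purr", "reranking with LLMs", "RL for reasoning"]
    prompt = build_listwise_prompt("LLM reranking", passages)
    # Pretend the LLM returned reasoning followed by its final ranking:
    response = "Passage 2 is on-topic, 3 is related... Final: [2] > [3] > [1]"
    reranked = [passages[i] for i in parse_ranking(response, len(passages))]
    print(reranked)
```

The parser deliberately tolerates reasoning text before the final answer, since a reasoning model's output is not guaranteed to be a bare ranking string.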

Community

Paper author Paper submitter

The code is available at https://github.com/lezhang7/Rearank.

Paper author Paper submitter

The model is released at: https://huggingface.co/le723z/Rearank-7B



