ReFoCUS: Reinforcement-guided Frame Optimization for Contextual Understanding
Abstract
ReFoCUS uses reinforcement learning to optimize frame selection for video-LLMs, enhancing reasoning performance in video QA by aligning frame choice with model preferences.
Recent progress in Large Multi-modal Models (LMMs) has enabled effective vision-language reasoning, yet the ability to understand video content remains constrained by suboptimal frame selection strategies. Existing approaches often rely on static heuristics or external retrieval modules to feed frame information into video-LLMs, which may fail to provide query-relevant information. In this work, we introduce ReFoCUS (Reinforcement-guided Frame Optimization for Contextual UnderStanding), a novel frame-level policy optimization framework that shifts the optimization target from textual responses to visual input selection. ReFoCUS learns a frame selection policy via reinforcement learning, using reward signals derived from a reference LMM to reflect the model's intrinsic preferences for frames that best support temporally grounded responses. To efficiently explore the large combinatorial frame space, we employ an autoregressive, conditional selection architecture that ensures temporal coherence while reducing complexity. Our approach does not require explicit supervision at the frame level and consistently improves reasoning performance across multiple video QA benchmarks, highlighting the benefits of aligning frame selection with model-internal utility.
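The abstract describes the method at a high level only; the sketch below is one plausible reading, not the authors' implementation. It shows an autoregressive frame-selection policy trained with a simple policy-gradient (REINFORCE-style) update, where each pick is conditioned on the frames chosen so far and the reward is assumed to come from a reference LMM scoring how well the selected frames support the answer. The `FrameSelector` module, the GRU conditioning, the choice of k=8 frames, and the `reference_lmm_reward` stub are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of reinforcement-guided frame selection.
import torch
import torch.nn as nn

class FrameSelector(nn.Module):
    """Autoregressively picks k frames, conditioning each pick on previous ones."""
    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRUCell(feat_dim, hidden)        # summarizes frames picked so far
        self.score = nn.Linear(hidden + feat_dim, 1)   # scores each candidate frame

    def forward(self, frame_feats: torch.Tensor, k: int):
        T, _ = frame_feats.shape
        h = frame_feats.new_zeros(self.rnn.hidden_size)
        mask = torch.zeros(T, dtype=torch.bool)
        idxs, logps = [], []
        for _ in range(k):
            logits = self.score(
                torch.cat([h.expand(T, -1), frame_feats], dim=-1)
            ).squeeze(-1)
            logits = logits.masked_fill(mask, float("-inf"))  # forbid repeats
            dist = torch.distributions.Categorical(logits=logits)
            i = dist.sample()
            idxs.append(i)
            logps.append(dist.log_prob(i))
            mask[i] = True
            h = self.rnn(frame_feats[i], h)  # condition the next pick on this frame
        return torch.stack(idxs), torch.stack(logps)

def reference_lmm_reward(selected_feats: torch.Tensor) -> torch.Tensor:
    # Hypothetical placeholder: in the paper, the reward reflects how well a
    # reference LMM answers the query given the selected frames.
    return torch.rand(())

# One policy-gradient step over a dummy 64-frame video with 512-d frame features.
feats = torch.randn(64, 512)
policy = FrameSelector(512)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

idxs, logps = policy(feats, k=8)
reward = reference_lmm_reward(feats[idxs])
loss = -(reward.detach() * logps.sum())  # REINFORCE objective: raise prob of rewarded selections
opt.zero_grad()
loss.backward()
opt.step()
```

The autoregressive structure keeps sampling tractable: instead of scoring every size-k subset of frames, the policy makes k conditional choices, which also encourages temporally coherent selections as described in the abstract.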
Community
Check out our latest frameworks:
- SALOVA: Efficient retrieval for grounding visual cues, https://arxiv.org/abs/2411.16173
- VideoMa2mba: State-space modeling for long-video understanding, https://arxiv.org/abs/2411.19460
- ReFoCUS: Reinforcement learning for frame selection, https://arxiv.org/abs/2506.01274
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ViaRL: Adaptive Temporal Grounding via Visual Iterated Amplification Reinforcement Learning (2025)
- Ground-R1: Incentivizing Grounded Visual Reasoning via Reinforcement Learning (2025)
- ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding (2025)
- VideoRFT: Incentivizing Video Reasoning Capability in MLLMs via Reinforced Fine-Tuning (2025)
- DeepEyes: Incentivizing "Thinking with Images" via Reinforcement Learning (2025)
- UniVG-R1: Reasoning Guided Universal Visual Grounding with Reinforcement Learning (2025)
- SVQA-R1: Reinforcing Spatial Reasoning in MLLMs via View-Consistent Reward Optimization (2025)