StreamBridge: Turning Your Offline Video Large Language Model into a Proactive Streaming Assistant
Abstract
StreamBridge is a framework that enhances offline Video-LLMs for streaming capabilities through a memory buffer with compression and a proactive response model, demonstrating superior performance on video understanding tasks.
We present StreamBridge, a simple yet effective framework that seamlessly transforms offline Video-LLMs into streaming-capable models. It addresses two fundamental challenges in adapting existing models to online scenarios: (1) limited capability for multi-turn real-time understanding, and (2) lack of proactive response mechanisms. Specifically, StreamBridge incorporates (1) a memory buffer combined with a round-decayed compression strategy, supporting long-context multi-turn interactions, and (2) a decoupled, lightweight activation model that can be effortlessly integrated into existing Video-LLMs, enabling continuous proactive responses. To further support StreamBridge, we construct Stream-IT, a large-scale dataset tailored for streaming video understanding, featuring interleaved video-text sequences and diverse instruction formats. Extensive experiments show that StreamBridge significantly improves the streaming understanding capabilities of offline Video-LLMs across various tasks, outperforming even proprietary models such as GPT-4o and Gemini 1.5 Pro. Simultaneously, it achieves competitive or superior performance on standard video understanding benchmarks.
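The round-decayed compression idea can be illustrated with a minimal sketch: a buffer stores the visual tokens of each dialogue round, and older rounds are compressed more aggressively so the total context stays bounded while the newest round is kept intact. The class name, the decay schedule, and the uniform-subsampling compressor below are all assumptions for illustration, not the paper's actual implementation.

```python
from collections import deque


class RoundDecayedMemoryBuffer:
    """Illustrative sketch of a memory buffer with round-decayed compression.

    Each round contributes a list of visual "tokens" (plain ints stand in
    for embeddings here). A round that is `age` turns old keeps roughly
    len(tokens) * decay**age tokens, so the context length stays bounded.
    This is a hypothetical simplification, not the paper's method.
    """

    def __init__(self, decay: float = 0.5, min_tokens: int = 1):
        self.decay = decay
        self.min_tokens = min_tokens
        self.rounds = deque()  # oldest round first, newest last

    def add_round(self, tokens):
        self.rounds.append(list(tokens))

    @staticmethod
    def _subsample(tokens, keep):
        # Uniform subsampling as a stand-in for a learned compressor.
        if keep >= len(tokens):
            return tokens
        step = len(tokens) / keep
        return [tokens[int(i * step)] for i in range(keep)]

    def context(self):
        # Newest round (age 0) is kept intact; each older round decays further.
        out = []
        for age, tokens in enumerate(reversed(self.rounds)):
            keep = max(self.min_tokens, int(len(tokens) * self.decay ** age))
            out = self._subsample(tokens, keep) + out
        return out
```

With three 8-token rounds and `decay=0.5`, the buffer keeps 8 tokens from the newest round, 4 from the previous one, and 2 from the oldest, yielding a 14-token context instead of 24.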
Community
We present StreamBridge, a framework that transforms offline Video-LLMs into streaming models.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos (2025)
- LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale (2025)
- ViSpeak: Visual Instruction Feedback in Streaming Videos (2025)
- OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts (2025)
- Memory-efficient Streaming VideoLLMs for Real-time Procedural Video Understanding (2025)
- VideoExpert: Augmented LLM for Temporal-Sensitive Video Understanding (2025)
- Learning Streaming Video Representation via Multitask Training (2025)