arxiv:2501.15513

TinyLLaVA-Video: A Simple Framework of Small-scale Large Multimodal Models for Video Understanding

Published on Jan 26, 2025

AI-generated summary

TinyLLaVA-Video is a parameter-efficient video understanding model that supports flexible frame sampling and achieves performance similar to larger models while minimizing computational requirements.

Abstract

We present TinyLLaVA-Video, a video understanding model with at most 4B parameters that processes video sequences in a simple manner, without complex architectures, while supporting both fps sampling and uniform frame sampling. The model is modular and scalable, allowing training and inference with limited computational resources and enabling users to replace components to suit their needs. We validate the effectiveness of the framework through experiments, with the best model achieving performance comparable to some existing 7B models on multiple video understanding benchmarks. The code and training recipes are fully open source, and all components and training data are publicly available. We hope this work can serve as a baseline for practitioners exploring small-scale multimodal models for video understanding. The code is available at https://github.com/ZhangXJ199/TinyLLaVA-Video.
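The abstract's distinction between fps sampling and uniform frame sampling comes down to how frame indices are chosen before frames reach the vision encoder. The sketch below is a minimal illustration of that distinction, not code from the TinyLLaVA-Video repository; the function names, signatures, and the 300-frame example are assumptions made for illustration.

```python
# Minimal sketch of the two frame-sampling strategies named in the
# abstract. Hypothetical helpers, not from the TinyLLaVA-Video repo:
# they compute frame indices only, independent of any decoding library.

def uniform_indices(total_frames: int, num_frames: int) -> list[int]:
    """Pick `num_frames` indices spread evenly across the whole video."""
    if total_frames <= num_frames:
        return list(range(total_frames))
    step = total_frames / num_frames
    # Take the midpoint of each of the `num_frames` equal segments.
    return [int(step * i + step / 2) for i in range(num_frames)]

def fps_indices(total_frames: int, video_fps: float, target_fps: float) -> list[int]:
    """Pick roughly one frame per 1/target_fps seconds of video time."""
    stride = max(1, round(video_fps / target_fps))
    return list(range(0, total_frames, stride))

if __name__ == "__main__":
    # A 10-second clip recorded at 30 fps, i.e. 300 frames.
    print(uniform_indices(300, 16))      # 16 evenly spaced indices
    print(fps_indices(300, 30.0, 1.0))   # 1 frame/second -> 10 indices
```

Uniform sampling keeps the number of sampled frames fixed regardless of clip length, while fps sampling keeps the temporal resolution fixed so the frame count grows with duration; supporting both lets users trade compute for temporal coverage.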
