You agree not to use the dataset to conduct experiments that cause harm to human subjects. Please note that the data in this dataset may be subject to other agreements; before using the data, be sure to read the relevant agreements carefully to ensure compliant use. Video copyrights belong to the original video creators or platforms, and the videos are for academic research use only.


CG-AV-Counting

hf_checkpoint hf_data arXiv Webpage

Summary

Despite progress in video understanding, current MLLMs struggle with counting tasks. Existing benchmarks are limited by short videos, close-set queries, a lack of clue annotations, and weak multimodal coverage. To address these limitations, we introduce CG-AV-Counting, a manually annotated, clue-grounded counting benchmark with 1,027 multimodal questions and 5,845 annotated clues over 497 long videos. It supports both black-box and white-box evaluation, serving as a comprehensive testbed for both end-to-end and reasoning-based counting.
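For reference, below is a minimal sketch of loading the benchmark with the Hugging Face `datasets` library. The repository ID, split name, and field names are assumptions for illustration and should be checked against the released files; access requires accepting the conditions above.

```python
from datasets import load_dataset

# Minimal loading sketch. The repository ID, split name, and field names below
# are assumptions, not the confirmed schema -- check the dataset files first.
# The dataset is gated, so authenticate beforehand (e.g. `huggingface-cli login`).
ds = load_dataset("CG-AV-Counting/CG-AV-Counting", split="test")

sample = ds[0]
print(sample.keys())  # e.g. question, answer, clues, video id (names may differ)
```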

CG-AV-Counting Summary

Leaderboard

Please visit our project page for the latest leaderboard.

Benchmark Statistics

CG-AV-Counting Statistics
CG-AV-Counting is based on a subset of 497 videos from CG-Bench. The benchmark includes 1,027 multimodal-query questions and 5,845 fine-grained, manually annotated clues. Nearly 40% of the samples require the model to use both the audio and visual modalities for counting, while the rest require only the visual modality. This design makes the benchmark applicable to both visual-only and audio-visual models. The benchmark covers object, event, and attribute counting targets. Among them, attribute counting is the most challenging because it requires grouping objects that share the attribute specified in the query.

This benchmark spans a numerical range from 1 to 76 with a long-tail distribution, where most counts fall between 1 and 20. Video content covers more than 10 categories, such as sports, life records, humor, and tutorials, offering greater domain diversity than existing benchmarks. All videos in the benchmark exceed 10 minutes, and reference intervals range from seconds to minutes, covering both short-term and long-range dependencies.
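To make the clue annotations concrete, here is a purely illustrative sketch of what a clue-grounded sample might look like; every field name and value is hypothetical and should be checked against the released JSON files.

```python
# Hypothetical sample layout, invented for illustration only.
sample = {
    "video_id": "example_video",
    "question": "How many times does the referee blow the whistle?",
    "query_modality": "audio-visual",   # ~40% of samples need both audio and video
    "answer": 3,
    "clues": [                          # one annotated clue interval per counted instance
        {"start": 12.4, "end": 13.1, "modality": "audio"},
        {"start": 95.0, "end": 95.8, "modality": "audio"},
        {"start": 310.2, "end": 311.0, "modality": "audio"},
    ],
}
```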

Benchmark Comparison

CG-AV-Counting Comparison

Existing video counting benchmarks typically suffer from limited modality coverage, content diversity, and reasoning complexity.

First, most prior datasets, such as DVD-Counting, VideoNIAH, and MVBench, contain only visual samples. In contrast, CG-AV-Counting introduces a richer query structure with audio-visual interactions, supporting audio-referenced visual queries, visual-referenced audio queries, and joint audio-visual counting—enabling evaluation of complex multimodal reasoning scenarios.

Second, regarding video length, most existing benchmarks rely on short clips (typically under 1 minute). CG-AV-Counting leverages long-form videos (all exceeding 10 minutes), requiring sustained temporal reasoning and long-range clue localization.

Third, in terms of counting targets, previous datasets mostly focus on object or event counting. In contrast, CG-AV-Counting comprehensively covers object, event, and attribute counting, enabling a more fine-grained and versatile evaluation.

Fourth, while prior datasets typically only provide final count labels, CG-AV-Counting includes manually annotated fine-grained counting clues that clearly indicate where and how evidence for the count appears across modalities and time. These annotations not only improve dataset transparency, but also support interpretable and diagnostic evaluation of model behavior.

Finally, beyond standard black-box evaluation of end-to-end counting accuracy, CG-AV-Counting introduces a white-box evaluation protocol to assess models’ intermediate reasoning steps. This dual protocol allows for a more comprehensive and explainable assessment of multimodal counting capabilities.
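As a rough illustration of how the two views differ, the sketch below scores a single question in a black-box fashion (comparing the predicted count to the ground truth) and in a white-box fashion (checking whether predicted clue intervals overlap the annotated ones). The metrics shown, including the temporal-IoU matching criterion, are assumptions for illustration, not the benchmark's official scoring functions.

```python
# Illustrative sketch of the two evaluation views; not the official scorer.

def black_box_scores(pred: int, gt: int) -> dict:
    """Black-box view: exact-match accuracy and absolute error for one question."""
    return {"exact_match": float(pred == gt), "abs_error": abs(pred - gt)}

def temporal_iou(a: tuple, b: tuple) -> float:
    """IoU of two (start, end) intervals in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def white_box_recall(pred_clues, gt_clues, thr=0.5) -> float:
    """White-box view: fraction of annotated clues matched by a prediction at IoU >= thr."""
    if not gt_clues:
        return 1.0
    hits = sum(any(temporal_iou(p, g) >= thr for p in pred_clues) for g in gt_clues)
    return hits / len(gt_clues)

# Example: the model predicts a count of 4 and localizes two clue intervals.
print(black_box_scores(pred=4, gt=5))
print(white_box_recall([(12.0, 15.5), (40.0, 44.0)],
                       [(11.8, 15.0), (39.5, 43.0), (70.0, 72.0)]))
```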

Overall, CG-AV-Counting significantly broadens the scope and realism of video counting evaluation, establishing a more challenging and representative benchmark for future multimodal reasoning research.

Experimental Results

CG-AV-Counting Experiments