---
license: mit
extra_gated_prompt: >-
You agree to not use the dataset to conduct experiments that cause harm to
human subjects. Please note that the data in this dataset may be subject to
other agreements. Before using the data, be sure to read the relevant
agreements carefully to ensure compliant use. Video copyrights belong to the
original video creators or platforms and are for academic research use only.
task_categories:
- visual-question-answering
extra_gated_fields:
Name: text
Company/Organization: text
Country: text
E-Mail: text
modalities:
- Video
- Audio
- Text
configs:
- config_name: default
data_files:
- split: test
path: cg-av-counting_dataviewer.json
language:
- en
size_categories:
- 1K<n<10K
---
[Model](https://huggingface.co/lulidong/AV-Reasoner) [Dataset](https://huggingface.co/datasets/CG-Bench/CG-AV-Counting) [Paper](https://arxiv.org/pdf/2506.05328) [Project Page](https://av-reasoner.github.io/)
## Updates
- [2025/07/22]
  - Due to errors in a few clue annotations introduced when converting frame indexes to timestamps, the previous leaderboard contained inaccuracies. We have re-evaluated all models and updated [the leaderboard](https://av-reasoner.github.io/).
## Summary
Despite progress in video understanding, current MLLMs still struggle with counting tasks. Existing benchmarks are limited by short videos, closed-set queries, a lack of clue annotations, and weak multimodal coverage. To address this, we introduce CG-AV-Counting, a manually annotated, clue-grounded counting benchmark with 1,027 multimodal questions and 5,845 annotated clues over 497 long videos. It supports both black-box and white-box evaluation, serving as a comprehensive testbed for both end-to-end and reasoning-based counting.
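As a convenience, the sketch below shows one way to load the test split with the Hugging Face `datasets` library. It assumes gated access has already been granted and that you are logged in; the field names are not documented here, so inspect an example to see the actual schema.

```python
# Minimal loading sketch (assumes gated access has been granted and you are
# logged in via `huggingface-cli login`). Field names are not specified here;
# print an example to see the actual schema.
from datasets import load_dataset

ds = load_dataset("CG-Bench/CG-AV-Counting", split="test")
print(ds)      # dataset size and column names
print(ds[0])   # one multimodal counting question with its clue annotations
```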
## Leaderboard
Please visit our [project page](https://av-reasoner.github.io/) for the latest leaderboard.
## Benchmark Statistics
CG-AV-Counting is built on a subset of 497 videos from CG-Bench. The benchmark includes 1,027 multimodal-query questions and 5,845 fine-grained, manually annotated clues. Nearly 40% of the samples require the model to use both the audio and visual modalities for counting, while the rest require only the visual modality. This design ensures that the benchmark is applicable to both visual-only models and audio-visual models. The benchmark covers object, event, and attribute counting targets. Among them, attribute counting is the most challenging because it requires grouping objects that share the attribute specified in the query.
The benchmark spans a numerical range from 1 to 76 with a long-tail distribution, in which most counts fall between 1 and 20. Video content covers more than 10 categories, such as sports, life records, humor, and tutorials, offering greater domain diversity than existing benchmarks. All videos exceed 10 minutes, and reference intervals range from seconds to minutes, covering both short-term and long-range dependencies.
## Benchmark Comparison
Existing video counting benchmarks typically suffer from limited modality coverage, content diversity, and reasoning complexity.
First, most prior datasets, such as DVD-Counting, VideoNIAH, and MVBench, contain only visual samples. In contrast, CG-AV-Counting introduces a richer query structure with audio-visual interactions, supporting audio-referenced visual queries, visual-referenced audio queries, and joint audio-visual counting, which enables evaluation of complex multimodal reasoning scenarios.
Second, regarding video length, most existing benchmarks rely on short clips (typically under 1 minute). CG-AV-Counting leverages long-form videos (all exceeding 10 minutes), requiring sustained temporal reasoning and long-range clue localization.
Third, in terms of counting targets, previous datasets mostly focus on object or event counting. In contrast, CG-AV-Counting comprehensively covers object, event, and attribute counting, enabling a more fine-grained and versatile evaluation.
Fourth, while prior datasets typically only provide final count labels, CG-AV-Counting includes manually annotated fine-grained counting clues that clearly indicate where and how evidence for the count appears across modalities and time. These annotations not only improve dataset transparency, but also support interpretable and diagnostic evaluation of model behavior.
Finally, beyond standard black-box evaluation of end-to-end counting accuracy, CG-AV-Counting introduces white-box evaluation protocols to assess models' intermediate reasoning steps. This dual protocol allows for a more comprehensive and explainable assessment of multimodal counting capabilities; a minimal sketch of typical black-box metrics is given at the end of this section.
Overall, CG-AV-Counting significantly broadens the scope and realism of video counting evaluation, establishing a more challenging and representative benchmark for future multimodal reasoning research.
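For reference, below is an illustrative sketch of two metrics commonly used for black-box counting evaluation, mean absolute error (MAE) and off-by-one accuracy. This is not the official CG-AV-Counting scorer; the benchmark's own protocol, including the white-box clue-grounded scoring, may differ.

```python
# Illustrative black-box counting metrics (MAE and off-by-one accuracy).
# NOT the official CG-AV-Counting scorer; the benchmark's own protocol
# (including white-box clue-grounded scoring) may use different metrics.
from typing import Sequence


def mae(preds: Sequence[int], gts: Sequence[int]) -> float:
    """Mean absolute error between predicted and ground-truth counts."""
    return sum(abs(p - g) for p, g in zip(preds, gts)) / len(gts)


def off_by_one_accuracy(preds: Sequence[int], gts: Sequence[int]) -> float:
    """Fraction of predictions within +/-1 of the ground-truth count."""
    return sum(abs(p - g) <= 1 for p, g in zip(preds, gts)) / len(gts)


if __name__ == "__main__":
    predictions = [3, 7, 12]    # hypothetical model outputs
    ground_truth = [3, 8, 15]   # hypothetical reference counts
    print(f"MAE: {mae(predictions, ground_truth):.2f}")
    print(f"Off-by-one accuracy: {off_by_one_accuracy(predictions, ground_truth):.2%}")
```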
## Experiment Results