---
license: apache-2.0
dataset_info:
  features:
    - name: catagory
      dtype: string
    - name: question
      dtype: string
    - name: image
      dtype: image
    - name: answer
      dtype: string
  splits:
    - name: full
      num_bytes: 141377742
      num_examples: 803
    - name: math
      num_bytes: 7788840
      num_examples: 176
    - name: physics
      num_bytes: 14724245
      num_examples: 157
    - name: game
      num_bytes: 20972060
      num_examples: 275
    - name: counting
      num_bytes: 97892598
      num_examples: 195
  download_size: 275506340
  dataset_size: 282755485
configs:
  - config_name: default
    data_files:
      - split: full
        path: data/full-*
      - split: math
        path: data/math-*
      - split: physics
        path: data/physics-*
      - split: game
        path: data/game-*
      - split: counting
        path: data/counting-*
---

# R-Bench-V

## Introduction

R-Bench-V Official Website

In "R-Bench-V", R denotes reasoning, and V denotes vision-indispensable.

R-Bench-V spans four categories: math, physics, counting, and game.

It features 803 questions centered on multi-modal outputs, which require image manipulation, such as generating novel images and constructing auxiliary lines, to support the reasoning process.
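
To illustrate, here is a minimal loading sketch using the Hugging Face `datasets` library. The split names come from the metadata above; the repository id `CXY07/R-Bench-V` is an assumption and should be replaced with the dataset's actual hub path.

```python
# Minimal sketch: loading R-Bench-V with the Hugging Face `datasets` library.
from datasets import load_dataset

# Assumption: the hub repository id. Adjust "CXY07/R-Bench-V" to the real path.
# Valid split names ("full", "math", "physics", "game", "counting") are listed
# in the dataset card metadata above.
bench = load_dataset("CXY07/R-Bench-V", split="full")

sample = bench[0]
print(sample["catagory"])  # category label (spelled "catagory" in the schema)
print(sample["question"])  # question text
print(sample["answer"])    # ground-truth answer
sample["image"].save("example.png")  # PIL image decoded by the `image` feature
```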

## Leaderboard

| Model | Source | Overall | w/o Math | Math | Physics | Counting | Game |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Human Expert 👑 | / | 82.3 | 81.7 | 84.7 | 69.4 | 81.0 | 89.1 |
| OpenAI o3 🥇 | Link | 25.8 | 19.5 | 48.3 | 20.4 | 22.1 | 17.1 |
| OpenAI o4-mini 🥈 | Link | 20.9 | 14.6 | 43.2 | 12.7 | 17.4 | 13.8 |
| Gemini 2.5 pro 🥉 | Link | 20.2 | 13.9 | 42.6 | 9.6 | 19.0 | 12.7 |
| Doubao-1.5-thinking-pro-m | Link | 17.1 | 11.0 | 38.6 | 13.4 | 9.7 | 10.5 |
| OpenAI o1 | Link | 16.2 | 11.0 | 34.7 | 5.7 | 12.3 | 13.1 |
| Doubao-1.5-vision-pro | Link | 15.6 | 11.5 | 30.1 | 8.9 | 12.8 | 12.0 |
| OpenAI GPT-4o-20250327 | Link | 14.1 | 11.2 | 24.4 | 3.2 | 13.3 | 14.2 |
| OpenAI GPT-4.1 | Link | 13.6 | 11.7 | 20.5 | 5.7 | 11.3 | 15.3 |
| Step-R1-V-Mini | Link | 13.2 | 8.8 | 29.0 | 6.4 | 10.3 | 9.1 |
| OpenAI GPT-4.5 | Link | 12.6 | 11.0 | 18.2 | 2.5 | 11.8 | 15.3 |
| Claude-3.7-sonnet | Link | 11.5 | 9.1 | 19.9 | 3.8 | 8.7 | 12.4 |
| QVQ-Max | Link | 11.0 | 8.1 | 21.0 | 5.7 | 6.2 | 10.9 |
| Qwen2.5VL-72B | Link | 10.6 | 9.2 | 15.3 | 3.8 | 6.2 | 14.5 |
| InternVL-3-38B | Link | 10.0 | 7.2 | 20.5 | 0.6 | 5.1 | 12.4 |
| Qwen2.5VL-32B | Link | 10.0 | 6.4 | 22.7 | 2.5 | 4.1 | 10.2 |
| MiniCPM-2.6-o | Link | 9.7 | 7.5 | 17.6 | 1.3 | 3.6 | 13.8 |
| Llama4-Scout (109B MoE) | Link | 9.5 | 6.9 | 18.8 | 3.2 | 4.1 | 10.9 |
| MiniCPM-2.6-V | Link | 9.1 | 7.2 | 15.9 | 1.3 | 6.2 | 11.3 |
| LLaVA-OneVision-72B | Link | 9.0 | 8.9 | 9.1 | 4.5 | 4.6 | 14.5 |
| DeepSeek-VL2 | Link | 9.0 | 7.0 | 15.9 | 0.6 | 5.6 | 11.6 |
| LLaVA-OneVision-7B | Link | 8.5 | 6.8 | 14.2 | 2.5 | 4.6 | 10.9 |
| Qwen2.5VL-7B | Link | 8.3 | 7.0 | 13.1 | 2.5 | 3.6 | 12.0 |
| InternVL-3-8B | Link | 8.2 | 6.0 | 15.9 | 1.9 | 5.6 | 8.7 |
| InternVL-3-14B | Link | 8.0 | 7.0 | 11.4 | 1.3 | 5.1 | 11.6 |
| Qwen2.5-Omni-7B | Link | 7.7 | 4.5 | 11.4 | 1.9 | 2.1 | 7.7 |

The values in the table are Top-1 accuracy, in %.
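
The Overall column is consistent with a question-count-weighted average of the per-category accuracies, using the split sizes from the metadata above. A minimal sketch of that arithmetic, assuming this weighting, is shown below:

```python
# Sanity-check sketch, assuming "Overall" is the question-count-weighted mean
# of the per-category Top-1 accuracies. Split sizes come from the metadata.
counts = {"math": 176, "physics": 157, "counting": 195, "game": 275}  # total: 803

def weighted_overall(acc):
    """Weight each category's accuracy (%) by its number of questions."""
    return sum(acc[c] * counts[c] for c in counts) / sum(counts.values())

human = {"math": 84.7, "physics": 69.4, "counting": 81.0, "game": 89.1}
print(round(weighted_overall(human), 1))  # 82.3, matching the Human Expert row
```

The w/o Math column presumably applies the same weighting over the physics, counting, and game splits only; since the displayed per-category values are rounded, recomputed figures can differ by about ±0.1.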

## BibTeX

```bibtex
@inproceedings{guo2025rbench-v,
  title={RBench-V: A Primary Assessment for Visual Reasoning Models
    with Multi-modal Outputs},
  author={Meng-Hao Guo and Xuanyu Chu and Qianrui Yang and Zhe-Han Mo and
    Yiqing Shen and Pei-Lin Li and Xinjie Lin and Jinnian Zhang and
    Xin-Sheng Chen and Yi Zhang and Kiyohiro Nakayama and Zhengyang Geng and
    Houwen Peng and Han Hu and Shi-Min Hu},
  year={2025},
  eprint={},
  archivePrefix={},
  primaryClass={},
  url={},
}
```