---
license: apache-2.0
language:
- zh
- en
tags:
- vlm
- benchmark
- graphic-reasoning
- intelligence-test
---
# 🧠 ReasonBench: Benchmarking and Improving Visual Language Models for Complex Graphic Reasoning

<img src="https://huggingface.co/datasets/cistine/ReasonBench/resolve/main/image_1.jpg"
     alt="background"
     width="50%"/>
<p style="font-style:italic">image: background</p>

## 🌐 Overview

**ReasonBench** is a comprehensive benchmark designed to evaluate Visual Language Models (VLMs) on complex graphic reasoning tasks. It contains **1,613 problems** collected from real-world intelligence tests, covering **11 core cognitive dimensions** and **29 task types**, and provides a robust framework for assessing VLMs' spatial, relational, and abstract reasoning capabilities.

**Dataset Type**: Visual Language Reasoning · Graphic Reasoning · Benchmark Evaluation

**Paper Link**: [https://arxiv.org/abs/2508.00323](https://arxiv.org/abs/2508.00323)

## 📊 Dataset Structure

### Core Cognitive Dimensions & Task Types

| Cognitive Dimension       | Task Type                   | Count |
|---------------------------|-----------------------------|-------|
| **Positional Patterns**   | Translation                 | 94    |
|                           | Rotation                    | 56    |
|                           | Combination                 | 30    |
| **Stylistic Patterns**    | Crossing                    | 54    |
|                           | Addition/Subtraction        | 67    |
|                           | Black/White Operation       | 63    |
| **Attribute Patterns**    | Symmetry                    | 109   |
|                           | Open/Close State            | 19    |
|                           | Combination                 | 6     |
| **Quantitative Patterns** | Lines                       | 173   |
|                           | Faces                       | 137   |
|                           | Points                      | 66    |
|                           | Elements                    | 94    |
|                           | Combination                 | 50    |
| **Spatial Patterns**      | Cubes                       | 109   |
|                           | 3D                          | 46    |
|                           | Polyhedrons                 | 17    |
|                           | Three Views                 | 40    |
|                           | Cross-Sections              | 35    |
|                           | Spatial Quantitative Trans. | 10    |
| **Special Patterns**      | 2D Combination              | 31    |
|                           | Figure Relations            | 40    |
| **Alphanumeric**          | Alphanumeric                | 27    |
| **B&W Blocks**            | Black & White Blocks        | 32    |
| **Other Patterns**        | Comprehensive               | 34    |
| **MENSA**                 | Task 1                      | 35    |
|                           | Task 2                      | 39    |
| **Raven**                 | Task 1                      | 40    |
|                           | Task 2                      | 60    |

### 🖼️ Input Formats

| Format                | Description |
|-----------------------|-------------|
| **Integrated Format** | Presents questions and options in a single image for holistic processing |
| **Separated Format**  | Splits questions and options into multiple images for step-by-step reasoning |
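
The two formats map directly onto how a request is assembled for a multimodal chat API. The sketch below is illustrative only: the `build_messages` helper and the placeholder URLs are not part of the dataset, but it shows one `image_url` part for the integrated format versus several parts for the separated format.

```python
# Sketch: building chat messages for the two ReasonBench input formats.
# The helper name and the URLs below are illustrative, not part of the dataset.

def build_messages(question_text: str, image_urls: list[str]) -> list[dict]:
    """Assemble one user message containing the prompt text plus one or more images."""
    content = [{"type": "text", "text": question_text}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return [{"role": "user", "content": content}]

# Integrated format: the question and its options live in a single image.
integrated = build_messages(
    "Choose the option that completes the pattern. Answer with A, B, C, or D.",
    ["https://example.com/question_with_options.jpg"],  # placeholder URL
)

# Separated format: the question image is followed by one image per option.
separated = build_messages(
    "The first image is the question; the next four are options A-D. Answer with one letter.",
    [
        "https://example.com/question.jpg",   # placeholder URLs
        "https://example.com/option_a.jpg",
        "https://example.com/option_b.jpg",
        "https://example.com/option_c.jpg",
        "https://example.com/option_d.jpg",
    ],
)
```
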
## 🔍 Key Features

- **Multi-format Evaluation**: Supports both integrated and separated input formats
- **Full Accessibility**: Provides public URLs for all images (questions, options, and combined sets); see the loading sketch below
- **Human Baseline**: Includes human performance metrics for comparison
- **Diverse Tasks**: Covers 29 distinct reasoning task types across 11 cognitive dimensions
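
The benchmark is hosted as a Hugging Face dataset, so the standard `datasets` loader should work. Below is a minimal loading sketch; the split name and the column names used here (`question`, `image_url`, `answer`) are assumptions rather than guaranteed fields, so check the dataset viewer for the actual schema.

```python
# Minimal sketch: loading ReasonBench with the Hugging Face `datasets` library.
# Requires `pip install datasets`. Split and column names below are assumptions,
# not guaranteed by the dataset card -- inspect the dataset viewer for the real schema.
from datasets import load_dataset

dataset = load_dataset("cistine/ReasonBench")  # repo id taken from the image URL above
print(dataset)                                 # shows the available splits and columns

split_name = list(dataset.keys())[0]           # e.g. "train"; depends on the release
example = dataset[split_name][0]

# Hypothetical field names -- adjust to the actual columns.
print(example.get("question"), example.get("image_url"), example.get("answer"))
```
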
## 🚀 Usage (GPT-4o example)

```python
import base64
import os

from openai import OpenAI  # Requires openai>=1.0.0

# Configuration: read the API key from the environment
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("Missing OPENAI_API_KEY environment variable")

# Initialize client (official SDK approach)
client = OpenAI(api_key=api_key)


def process_image_question(image_path: str, question: str, max_tokens: int = 300) -> str:
    """Send a local image and a question to the GPT-4o chat completions API."""
    # Encode the image as base64 for a data URL
    with open(image_path, "rb") as f:
        base64_image = base64.b64encode(f.read()).decode("utf-8")

    # Construct the messages payload: one text part and one image part
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}",
                        "detail": "auto",  # Options: low, high, auto
                    },
                },
            ],
        }
    ]

    # Make the API request
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content


# Example usage
if __name__ == "__main__":
    image_path = "path/to/your/image.jpg"    # Update with an actual path
    user_question = "What's in this image?"  # Customize your question

    try:
        answer = process_image_question(image_path, user_question)
        print("AI Response:", answer)
    except Exception as e:
        print(f"Error: {e}")
```
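
Because ReasonBench exposes public URLs for its images, a hosted image can also be passed directly in the `image_url` field instead of a base64 data URL. The snippet below is a hedged variant of the example above; the image URL is a placeholder, not a real dataset item.

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Variant: pass a hosted image URL directly instead of a base64 data URL.
# The URL below is a placeholder -- substitute a real ReasonBench image URL.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Which option completes the pattern? Answer with one letter."},
                {"type": "image_url", "image_url": {"url": "https://example.com/reasonbench_item.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```
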
## 🚀 Usage (GPT-4o via raw HTTP request)

The same request can be issued without the official SDK by calling the chat completions endpoint directly with `requests`:

```python
import base64
import os

import requests

# Configure the OpenAI API key (store it in an environment variable)
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("Please set the OPENAI_API_KEY environment variable")


def encode_image(image_path):
    """Encode a local image as a base64 string."""
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


# Example image path and question
image_path = "path/to/your/image.jpg"            # Replace with your image path
question = "Describe the content of this image"  # Replace with your question

# Build the API request
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}

payload = {
    "model": "gpt-4o",  # A model with image input support
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{encode_image(image_path)}"},
                },
            ],
        }
    ],
    "max_tokens": 300,  # Limit the response length
}

# Send the request
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers=headers,
    json=payload,
)

# Handle the response
if response.status_code == 200:
    result = response.json()
    answer = result["choices"][0]["message"]["content"]
    print("AI answer:", answer)
else:
    print("Request failed, status code:", response.status_code)
    print("Error message:", response.text)
```
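
Since the benchmark reports a human baseline, model outputs are typically scored as multiple-choice accuracy. The sketch below shows one way to do that; the `prediction`/`answer` field names and the letter-extraction heuristic are assumptions, not the official ReasonBench evaluation protocol.

```python
import re

# Minimal sketch: scoring multiple-choice predictions against ground-truth labels.
# Field names and the letter-extraction heuristic are assumptions, not the
# official ReasonBench evaluation code.

def extract_choice(model_output: str):
    """Pull the first standalone option letter (A-F) out of a model response."""
    match = re.search(r"\b([A-F])\b", model_output.upper())
    return match.group(1) if match else None

def accuracy(records: list) -> float:
    """records: [{'prediction': raw model text, 'answer': gold letter}, ...]"""
    correct = sum(
        1 for r in records if extract_choice(r["prediction"]) == r["answer"].strip().upper()
    )
    return correct / len(records) if records else 0.0

# Toy example with made-up records
records = [
    {"prediction": "The pattern rotates clockwise, so the answer is B.", "answer": "B"},
    {"prediction": "Answer: D", "answer": "C"},
]
print(f"Accuracy: {accuracy(records):.2%}")  # 50.00%
```
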
If you use ReasonBench in your work, please cite:
```bibtex
@misc{zhang2025oedipus,
  author        = {Jianyi Zhang and Xu Ji and Ziyin Zhou and Yuchen Zhou and Shubo Shi and Haoyu Wu and Zhen Li and Shizhao Liu},
  title         = {Oedipus and the Sphinx: Benchmarking and Improving Visual Language Models for Complex Graphic Reasoning},
  howpublished  = {arXiv preprint},
  archivePrefix = {arXiv},
  eprint        = {2508.00323},
  primaryClass  = {cs.AI},
  year          = {2025},
  note          = {arXiv:2508.00323v1 [cs.AI]},
  url           = {https://arxiv.org/abs/2508.00323}
}
```