# GeoPeP: Geometric Perception for Mathematical Reasoning (diagram-caption subset)
Welcome to the caption-based subset of GeoPeP (Geometric PercePtion-oriented dataset), a large-scale dataset designed to enhance geometric perception and visual reasoning capabilities in multimodal large language models (MLLMs).
## Overview
GeoPeP-Caption consists of 100,000 synthetic diagrams paired with automatically generated fine-grained captions. Each caption provides a structured description of the geometric primitives within the diagram, enabling vision-language models to learn detailed spatial reasoning and accurate object grounding.
This dataset is part of the broader GeoPeP project, which includes both a caption-based subset (to be released soon) and an instruction-following subset, bridging the gap between visual perception and mathematical reasoning.
## Download Link
You can access and download the GeoPeP-Caption dataset here:
GeoPeP-Caption
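Once downloaded, the released JSON can be read with standard tooling. Below is a minimal loading sketch using the Hugging Face `datasets` JSON loader; the file path is a placeholder for wherever you saved the download.

```python
# Minimal loading sketch using the Hugging Face `datasets` JSON loader.
# "geopep_caption.json" is a placeholder path for the downloaded file.
from datasets import load_dataset

ds = load_dataset("json", data_files="geopep_caption.json", split="train")
print(len(ds))         # number of records
print(ds[0]["image"])  # e.g. "image_id.png"
```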
## Dataset Structure
```json
{
  "image": "image_id.png",
  "conversations": [
    {
      "from": "human",
      "value": "<image>\nPlease provide structured geometric information for the image."
    },
    {
      "from": "gpt",
      "value": "First, I count 3 prominent objects in the image, including 1 equilateral quadrilateral, 1 isosceles trapezoid, 1 pentagon. Next, for the shape information, object BFLV is an equilateral quadrilateral, object CGOY is an isosceles trapezoid, object AHIWX is a pentagon. Furthermore, I also know the fine-grained bounding box coordinates: the equilateral quadrilateral BFLV is located at [0.32, 0.46, 0.56, 0.68], the isosceles trapezoid CGOY is located at [0.23, 0.29, 0.58, 0.61], the pentagon AHIWX is located at [0.22, 0.24, 0.78, 0.76]. Finally, let me explain the relationships: equilateral quadrilateral BFLV is in the bottom right relative to isosceles trapezoid CGOY; equilateral quadrilateral BFLV is in the bottom left relative to pentagon AHIWX."
    }
  ]
}
```
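Note that the object names and bounding boxes are embedded in the caption text rather than stored as separate fields, so downstream use typically requires parsing. The sketch below extracts per-object boxes with a regular expression keyed to the caption template shown above; the file name, and the assumption that the file is a single JSON array of records, are ours and not guaranteed by the release.

```python
# Sketch: extract per-object bounding boxes from a GeoPeP-Caption record.
# The pattern mirrors the caption template above ("... <NAME> is located
# at [x1, y1, x2, y2]"); field names follow the sample record.
import json
import re

BBOX_PATTERN = re.compile(
    r"(?P<name>[A-Z]{2,}) is located at "
    r"\[(?P<coords>[\d.]+, [\d.]+, [\d.]+, [\d.]+)\]"
)

def parse_bboxes(record: dict) -> dict:
    """Return {object_name: [x1, y1, x2, y2]} parsed from the gpt caption."""
    caption = next(
        turn["value"] for turn in record["conversations"] if turn["from"] == "gpt"
    )
    return {
        m.group("name"): [float(v) for v in m.group("coords").split(", ")]
        for m in BBOX_PATTERN.finditer(caption)
    }

# Assumes the download is one JSON array; adjust if it is JSON Lines.
with open("geopep_caption.json") as f:
    records = json.load(f)
print(parse_bboxes(records[0]))
# e.g. {"BFLV": [0.32, 0.46, 0.56, 0.68], "CGOY": [...], "AHIWX": [...]}
```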
## Intended Use
- Pretraining and fine-tuning multimodal LLMs (see the sketch after this list).
- Enhancing geometric perception and caption grounding in visual models.
- Supporting research in visual reasoning and mathematical understanding.
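For the fine-tuning use case, each record maps naturally onto a single-turn supervised example. A minimal sketch, assuming each record has exactly the two turns shown above and that images sit in a local directory we call `images/` (both assumptions, not stated by the release):

```python
# Sketch: turn one GeoPeP-Caption record into a (image, prompt, target)
# triple for supervised fine-tuning. Assumes single-turn records and a
# local image directory; both are assumptions for illustration.
from pathlib import Path

IMAGE_ROOT = Path("images")  # assumed location of the .png files

def to_training_example(record: dict) -> tuple[Path, str, str]:
    human, gpt = record["conversations"]  # assumes exactly two turns
    return IMAGE_ROOT / record["image"], human["value"], gpt["value"]
```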
## Citation
If you use this dataset, please cite our work:
```bibtex
@article{sun2025mathglance,
  author  = {Sun, Yanpeng and Zhang, Shan and Tang, Wei and Chen, Aotian and Koniusz, Piotr and Zou, Kai and Xue, Yuan and van den Hengel, Anton},
  title   = {MATHGLANCE: Multimodal Large Language Models Do Not Know Where to Look in Mathematical Diagrams},
  journal = {arXiv preprint arXiv:2503.20745},
  year    = {2025}
}
```
## Contact
If you have any questions or suggestions, please open an issue or contact shan.zhang@adelaide.edu.au.