Datacard

This is the VLM evaluation dataset used in VLABench. It is designed to evaluate the planning capabilities of Vision-Language Models (VLMs) in embodied scenarios.

Source

Uses

The dataset structure is as follows:

vlm_evaluation_v1.0/
├── CommenSence/
│   ├── add_condiment_common_sense/
│   ├── insert_flower_common_sense/
│   ├── select_billiards_common_sense/
│   ├── select_chemistry_tube_common_sense/
│   ├── select_drink_common_sense/
│   ├── select_fruit_common_sense/
│   ├── select_nth_largest_poker/
│   └── select_toy_common_sense/
├── Complex/
│   ├── book_rearrange/
│   ├── cook_dishes/
│   ├── hammer_nail_and_hang_picture/
│   ├── take_chemistry_experiment/
│   └── texas_holdem/
├── M&T/
│   ├── add_condiment/
│   ├── insert_flower/
│   ├── select_billiards/
│   ├── select_book/
│   ├── select_chemistry_tube/
│   ├── select_drink/
│   ├── select_fruit/
│   ├── select_poker/
│   └── select_toy/
├── PhysicsLaw/
│   ├── density_qa/
│   ├── friction_qa/
│   ├── magnetism_qa/
│   ├── reflection_qa/
│   ├── speed_of_sound_qa/
│   └── thermal_expansion_qa/
├── Semantic/
│   ├── add_condiment_semantic/
│   ├── insert_flower_semantic/
│   ├── select_billiards_semantic/
│   ├── select_book_semantic/
│   ├── select_chemistry_tube_semantic/
│   ├── select_drink_semantic/
│   ├── select_fruit_semantic/
│   ├── select_poker_semantic/
│   └── select_toy_semantic/
└── Spatial/
    ├── add_condiment_spatial/
    ├── insert_bloom_flower/
    ├── select_billiards_spatial/
    ├── select_book_spatial/
    ├── select_chemistry_tube_spatial/
    ├── select_fruit_spatial/
    ├── select_poker_spatial/
    └── select_toy_spatial/
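
As a quick sanity check, here is a minimal Python sketch that walks this hierarchy and counts the episodes per subtask. It assumes the dataset has already been downloaded to a local vlm_evaluation_v1.0/ directory; the path is illustrative, not fixed by the dataset itself.

from pathlib import Path

# Illustrative local path; adjust to wherever the dataset was downloaded.
root = Path("vlm_evaluation_v1.0")

# Walk the six capability categories and their subtask folders,
# counting the episode directories (example0 ... example99) in each.
for category in sorted(p for p in root.iterdir() if p.is_dir()):
    for subtask in sorted(p for p in category.iterdir() if p.is_dir()):
        episodes = [p for p in subtask.iterdir() if p.is_dir()]
        print(f"{category.name}/{subtask.name}: {len(episodes)} episodes")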

Each subtask contains 100 episodes, organized as follows:

vlm_evaluation_v1.0/
└── CommenSence/
    └── add_condiment_common_sense/
        ├── example0/
        │   ├── env_config/
        │   ├── input/
        │   └── output/
        ├── ...
        └── example99/

The env_config folder contains the episode_config for conveniently reproducing the evaluation environment. The input folder contains the stacked four-view images and their segmented, visually prompted counterparts, which serve as the visual input to the VLMs, together with the instruction describing the task. The output folder contains the ground-truth action sequence as a JSON file.
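
Below is a minimal sketch of loading one episode under these assumptions. The episode path is illustrative, and the exact file names inside input/ and output/ are not specified here, so the sketch simply globs for common file types rather than hardcoding names.

import json
from pathlib import Path

# Illustrative episode path; every subtask/exampleN folder follows the same layout.
episode = Path("vlm_evaluation_v1.0/CommenSence/add_condiment_common_sense/example0")

# Visual inputs: the stacked four-view image and its visually prompted (segmented)
# variant. File names are not fixed here, so glob for common image extensions.
images = sorted((episode / "input").glob("*.png")) + sorted((episode / "input").glob("*.jpg"))

# Task instruction shipped alongside the images; the file type is an assumption.
instructions = sorted((episode / "input").glob("*.txt")) + sorted((episode / "input").glob("*.json"))

# Ground-truth action sequence stored as JSON in the output folder.
ground_truth = [json.loads(p.read_text()) for p in sorted((episode / "output").glob("*.json"))]

print(f"{len(images)} image(s), {len(instructions)} instruction file(s), "
      f"{len(ground_truth)} ground-truth action file(s)")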

Evaluate

To evaluate on this dataset, please refer to the evaluation guide in our repository.
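
For example, one way to fetch the full dataset locally is via huggingface_hub; the repo_id below is a placeholder rather than the confirmed identifier of this repository, so replace it with the name shown on this page.

from huggingface_hub import snapshot_download

# repo_id is a placeholder; substitute this dataset's actual Hub identifier.
snapshot_download(
    repo_id="<namespace>/<this-dataset>",
    repo_type="dataset",
    local_dir="vlm_evaluation_v1.0",
)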

Citation

If you find our work helpful, please cite us:

@misc{zhang2024vlabench,
      title={VLABench: A Large-Scale Benchmark for Language-Conditioned Robotics Manipulation with Long-Horizon Reasoning Tasks}, 
      author={Shiduo Zhang and Zhe Xu and Peiju Liu and Xiaopeng Yu and Yuan Li and Qinghui Gao and Zhaoye Fei and Zhangyue Yin and Zuxuan Wu and Yu-Gang Jiang and Xipeng Qiu},
      year={2024},
      eprint={2412.18194},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2412.18194}, 
}