
Multi-View UAV Dataset

A comprehensive multi-view UAV dataset for visual navigation research in GPS-denied urban environments, collected using the CARLA simulator.

Simulation environment: CARLA
License: MIT

Dataset Overview

This dataset supports research on visual navigation for unmanned aerial vehicles (UAVs) in GPS-denied urban environments. It features multi-directional camera views collected from simulated UAV flights across diverse urban landscapes, making it ideal for developing localization and navigation algorithms that rely on visual cues rather than GPS signals.

[Figure: multi-view RGB visualization]

Key Features

  • Multi-View Perspective: 5 cameras (Front, Back, Left, Right, Down) providing surround and downward visual coverage
  • Multiple Data Types: RGB images, semantic segmentation, and depth maps for comprehensive scene understanding
  • Precise Labels: Accurate position coordinates and rotation angles for each frame
  • Diverse Environments: 8 different urban maps with varying architectural styles and layouts
  • Large Scale: 357,690 multi-view frames enabling robust algorithm training and evaluation

Dataset Structure

Multi-View-UAV-Dataset/town{XX}_YYYYMMDD_HHMMSS/
├── calibration/
│   └── camera_calibration.json    # Parameters for all 5 UAV onboard cameras
├── depth/                         # Depth images from all cameras
│   ├── Back/
│   │   ├── NNNNNN.npy             # Depth data in NumPy format
│   │   ├── NNNNNN.png             # Visualization of depth data
│   │   └── ...
│   ├── Down/
│   ├── Front/
│   ├── Left/
│   └── Right/
├── metadata/                      # UAV position, rotation angles, and timestamps
│   ├── NNNNNN.json
│   ├── NNNNNN.json
│   └── ...
├── rgb/                           # RGB images from all cameras (PNG format)
│   ├── Back/
│   ├── Down/
│   ├── Front/
│   ├── Left/
│   └── Right/
└── semantic/                      # Semantic segmentation images (PNG format)
    ├── Back/
    ├── Down/
    ├── Front/
    ├── Left/
    └── Right/
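
The layout above makes it easy to pair the per-camera files that belong to one frame. Below is a minimal sketch, assuming a local copy of a single recording; the recording directory name is illustrative.

import glob
import os

# Minimal sketch, assuming the layout above; the recording name is illustrative.
recording = "Multi-View-UAV-Dataset/town01_20241218_092919"
views = ["Front", "Back", "Left", "Right", "Down"]

# Frame IDs come from the metadata directory (one JSON per frame).
frame_ids = sorted(
    os.path.splitext(os.path.basename(p))[0]
    for p in glob.glob(os.path.join(recording, "metadata", "*.json"))
)

def frame_paths(frame_id):
    """Collect the RGB, semantic, and depth paths for one frame across all views."""
    return {
        view: {
            "rgb": os.path.join(recording, "rgb", view, f"{frame_id}.png"),
            "semantic": os.path.join(recording, "semantic", view, f"{frame_id}.png"),
            "depth": os.path.join(recording, "depth", view, f"{frame_id}.npy"),
        }
        for view in views
    }

if frame_ids:
    print(f"{len(frame_ids)} frames, e.g. {frame_paths(frame_ids[0])['Front']['rgb']}")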

Data Format Details

Image Data

  • RGB Images: 400×300 pixel resolution in PNG format
  • Semantic Segmentation: Class-labeled pixels in PNG format
  • Depth Maps (see the sketch after this list):
    • PNG format for visualization
    • NumPy (.npy) format for precise depth values
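
Since the PNGs are only for visualization, numeric work should read the .npy files. A minimal sketch; the path is illustrative, and the expected shape is an assumption based on the 400×300 resolution above:

import numpy as np

# Illustrative path; the .npy holds the precise values, the neighboring PNG
# is only a visualization.
depth = np.load("depth/Front/000000.npy")
print(depth.shape, depth.dtype)  # expected (300, 400), matching the RGB resolution (assumption)
print(depth.min(), depth.max())  # inspect the value range; units depend on the export settings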

Metadata

Each frame includes a corresponding JSON file (an illustrative record is sketched after this list) containing:

  • Precise UAV position coordinates (x, y, z)
  • Rotation angles (roll, pitch, yaw)
  • Timestamp information
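
For orientation, here is the shape of one metadata record, inferred from the fields accessed in the usage example further below; the values are made up and the timestamp key name is an assumption:

# Illustrative metadata record -- inspect one JSON file to confirm the schema.
example_metadata = {
    "position": {"x": 102.3, "y": -45.7, "z": 50.0},       # UAV position coordinates
    "rotation": {"roll": 0.0, "pitch": 0.0, "yaw": 90.0},  # rotation angles (CARLA uses degrees)
    "timestamp": 12.34,                                    # assumed key name and format
}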

Camera Calibration

  • Single JSON file with intrinsic and extrinsic parameters for all five cameras
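
A quick way to inspect the calibration is to load the file and list its per-camera entries. A minimal sketch; the key names used here ("Front", "intrinsic", "extrinsic") are assumptions, so print the file once to confirm the actual schema:

import json

# Minimal sketch; key names are assumptions -- print the file once to confirm.
with open("calibration/camera_calibration.json") as f:
    calib = json.load(f)

print(sorted(calib))  # expected: one entry per camera
front = calib.get("Front", {})
print(front.get("intrinsic"), front.get("extrinsic"))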

Collection Methodology

The dataset was collected using:

  • Simulator: CARLA open urban driving simulator
  • Flight Pattern: Constant-height UAV flight following road-aligned waypoints with random direction changes (sketched after this list)
  • Hardware: 4× RTX 5000 Ada GPUs for simulation and data collection
  • Environments: 8 urban maps (Town01, Town02, Town03, Town04, Town05, Town06, Town07, Town10HD)
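
For intuition, the flight pattern can be sketched with CARLA's waypoint API. This is not the authors' collection script: the flight height, step size, and episode length are assumptions, and the sensor-capture step is elided.

import random
import carla

# Illustrative sketch of the described pattern: constant-height flight along
# road-aligned waypoints with random direction changes at branches.
client = carla.Client("localhost", 2000)
world = client.get_world()
carla_map = world.get_map()

FLIGHT_HEIGHT = 50.0  # meters above the road (assumed)
STEP = 2.0            # waypoint spacing in meters (assumed)

waypoint = carla_map.get_waypoint(random.choice(carla_map.get_spawn_points()).location)
for _ in range(1000):  # episode length (assumed)
    successors = waypoint.next(STEP)
    if not successors:  # dead end: restart from a fresh spawn point
        successors = [carla_map.get_waypoint(random.choice(carla_map.get_spawn_points()).location)]
    waypoint = random.choice(successors)  # random direction change at branches
    loc = waypoint.transform.location
    uav_pose = carla.Transform(
        carla.Location(loc.x, loc.y, loc.z + FLIGHT_HEIGHT),
        waypoint.transform.rotation,
    )
    # ...move the five-camera rig to uav_pose and capture RGB/semantic/depth...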

Visual Examples

RGB Camera Views

[Figure: RGB views from the five onboard cameras]

Semantic Segmentation Views

[Figure: semantic segmentation views from the five onboard cameras]

Depth Map Views

[Figure: depth map views from the five onboard cameras]

Research Applications

This dataset enables research in multiple areas:

  • Visual-based UAV localization in GPS-denied environments
  • Multi-view feature extraction and fusion
  • Communication-efficient UAV-edge collaboration
  • Task-oriented information bottleneck approaches
  • Deep learning for aerial navigation

The dataset was specifically designed for the research presented in "Task-Oriented Communications for Visual Navigation with Edge-Aerial Collaboration in Low Altitude Economy" (arXiv:2504.18317).

Usage Example

# Basic example to load and visualize data
import os
import json
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Set paths
dataset_path = "path/to/dataset/town05_20241218_092919/town05_20241218_092919"
frame_id = "000000"

# Load metadata
with open(os.path.join(dataset_path, "metadata", f"{frame_id}.json"), "r") as f:
    metadata = json.load(f)
    
# Print UAV position
print(f"UAV Position: X={metadata['position']['x']}, Y={metadata['position']['y']}, Z={metadata['position']['z']}")
print(f"UAV Rotation: Roll={metadata['rotation']['roll']}, Pitch={metadata['rotation']['pitch']}, Yaw={metadata['rotation']['yaw']}")

# Load RGB image (Front camera); displayed below
rgb_path = os.path.join(dataset_path, "rgb", "Front", f"{frame_id}.png")
rgb_image = Image.open(rgb_path)

# Load semantic segmentation image (Front camera); displayed below
semantic_path = os.path.join(dataset_path, "semantic", "Front", f"{frame_id}.png")
semantic_image = Image.open(semantic_path)

# Load depth data (Front camera)
depth_path = os.path.join(dataset_path, "depth", "Front", f"{frame_id}.npy")
depth_data = np.load(depth_path)

# Display images
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
axes[0].imshow(rgb_image)
axes[0].set_title("RGB Image")
axes[1].imshow(semantic_image)
axes[1].set_title("Semantic Segmentation")
axes[2].imshow(depth_data, cmap='plasma')
axes[2].set_title("Depth Map")
plt.tight_layout()
plt.show()

Citation

If you use this dataset in your research, please cite our paper:

@misc{fang2025taskorientedcommunicationsvisualnavigation,
      title={Task-Oriented Communications for Visual Navigation with Edge-Aerial Collaboration in Low Altitude Economy}, 
      author={Zhengru Fang and Zhenghao Liu and Jingjing Wang and Senkang Hu and Yu Guo and Yiqin Deng and Yuguang Fang},
      year={2025},
      eprint={2504.18317},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.18317}, 
}

License

This dataset is released under the MIT License.

Acknowledgments

This work was supported in part by the Hong Kong SAR Government under the Global STEM Professorship and Research Talent Hub, the Hong Kong Jockey Club under the Hong Kong JC STEM Lab of Smart City (Ref.: 2023-0108), the National Natural Science Foundation of China under Grant No. 62222101 and No. U24A20213, the Beijing Natural Science Foundation under Grant No. L232043 and No. L222039, the Natural Science Foundation of Zhejiang Province under Grant No. LMS25F010007, and the Hong Kong Innovation and Technology Commission under InnoHK Project CIMDA.

Contact

For questions, issues, or collaboration opportunities, please contact:

  • Email: zhefang4-c [AT] my [DOT] cityu [DOT] edu [DOT] hk
  • GitHub: TOC-Edge-Aerial