---
license: mit
---
# Multi-View UAV Dataset

A comprehensive multi-view UAV dataset for visual navigation research in GPS-denied urban environments, collected using the CARLA simulator.

[License: MIT](https://opensource.org/licenses/MIT)

## Dataset Overview

This dataset supports research on visual navigation for unmanned aerial vehicles (UAVs) in GPS-denied urban environments. It features multi-directional camera views collected from simulated UAV flights across diverse urban landscapes, making it well suited for developing localization and navigation algorithms that rely on visual cues rather than GPS signals.

## Key Features

- **Multi-View Perspective**: 5 cameras (Front, Back, Left, Right, Down) providing panoramic visual information
- **Multiple Data Types**: RGB images, semantic segmentation, and depth maps for comprehensive scene understanding
- **Precise Labels**: Accurate position coordinates and rotation angles for each frame
- **Diverse Environments**: 8 different urban maps with varying architectural styles and layouts
- **Large Scale**: 357,690 multi-view frames enabling robust algorithm training and evaluation

## Dataset Structure

```
Dataset_CARLA/town{XX}_YYYYMMDD_HHMMSS/town{XX}_YYYYMMDD_HHMMSS/
├── calibration/
│   └── camera_calibration.json    # Parameters for all 5 UAV onboard cameras
├── depth/                         # Depth images from all cameras
│   ├── Back/
│   │   ├── NNNNNN.npy             # Depth data in NumPy format
│   │   ├── NNNNNN.png             # Visualization of depth data
│   │   └── ...
│   ├── Down/
│   ├── Front/
│   ├── Left/
│   └── Right/
├── metadata/                      # UAV position, rotation angles and timestamps
│   ├── NNNNNN.json
│   ├── NNNNNN.json
│   └── ...
├── rgb/                           # RGB images from all cameras (PNG format)
│   ├── Back/
│   ├── Down/
│   ├── Front/
│   ├── Left/
│   └── Right/
└── semantic/                      # Semantic segmentation images (PNG format)
    ├── Back/
    ├── Down/
    ├── Front/
    ├── Left/
    └── Right/
```
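
Given this layout, a capture can be indexed with the standard library alone. A minimal sketch, assuming a placeholder capture path; frame IDs are taken from the metadata folder since they are shared across modalities:

```python
# Minimal sketch: index one capture folder (the path is a placeholder).
import os

capture = "path/to/Dataset_CARLA/town05_20241218_092919/town05_20241218_092919"

# Frame IDs are shared across modalities, so the metadata folder is a convenient index.
frame_ids = sorted(
    os.path.splitext(name)[0]
    for name in os.listdir(os.path.join(capture, "metadata"))
    if name.endswith(".json")
)
print(f"{len(frame_ids)} frames, first: {frame_ids[0]}, last: {frame_ids[-1]}")

# Each image modality holds one sub-folder per camera view.
for modality in ("rgb", "semantic", "depth"):
    cameras = sorted(os.listdir(os.path.join(capture, modality)))
    print(f"{modality}: {cameras}")
```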

## Data Format Details

### Image Data
- **RGB Images**: 400×300 pixel resolution in PNG format
- **Semantic Segmentation**: Class-labeled pixels in PNG format
- **Depth Maps** (a rendering sketch follows this list):
  - PNG format for visualization
  - NumPy (.npy) format for precise depth values
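
Because the .npy files carry the raw depth values, an image like the bundled PNGs can be regenerated by normalizing the array. A minimal sketch; the normalization and colormap are illustrative choices, not necessarily how the shipped PNGs were produced:

```python
# Sketch: render a raw depth array (.npy) as an image.
# Normalization and colormap are illustrative, not the dataset's exact rendering.
import numpy as np
import matplotlib.pyplot as plt

depth = np.load("path/to/capture/depth/Front/000000.npy")  # per-pixel depth values

# Scale to [0, 1] for display; keep the raw array for metric use.
d_min, d_max = float(depth.min()), float(depth.max())
normalized = (depth - d_min) / max(d_max - d_min, 1e-6)

plt.imsave("depth_front_000000.png", normalized, cmap="plasma")
```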

### Metadata
Each frame includes a corresponding JSON file containing:
- Precise UAV position coordinates (x, y, z)
- Rotation angles (roll, pitch, yaw)
- Timestamp information
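
Since every frame carries a pose, the per-frame JSON files can be stacked into a full flight trajectory. The sketch below uses only the `position` keys that also appear in the usage example further down; the timestamp field is omitted because its exact key name is not shown in this card:

```python
# Sketch: stack per-frame metadata into a (num_frames, 3) trajectory array.
import os
import json
import numpy as np

metadata_dir = "path/to/capture/metadata"  # placeholder path

positions = []
for name in sorted(os.listdir(metadata_dir)):
    if not name.endswith(".json"):
        continue
    with open(os.path.join(metadata_dir, name), "r") as f:
        meta = json.load(f)
    p = meta["position"]  # same keys as in the usage example below
    positions.append((p["x"], p["y"], p["z"]))

trajectory = np.asarray(positions)
print("trajectory shape:", trajectory.shape)
```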

### Camera Calibration
- Single JSON file with intrinsic and extrinsic parameters for all five cameras
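
The exact schema of `camera_calibration.json` is not spelled out in this card, so the snippet below simply loads the file and prints the start of its structure rather than assuming field names:

```python
# Sketch: peek at the structure of camera_calibration.json without assuming its schema.
import json

with open("path/to/capture/calibration/camera_calibration.json", "r") as f:
    calibration = json.load(f)

# Pretty-print the beginning of the file to see how the five cameras are organized.
print(json.dumps(calibration, indent=2)[:1000])
```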

## Collection Methodology

The dataset was collected using:
- **Simulator**: CARLA open urban driving simulator
- **Flight Pattern**: Constant-height UAV flight following road-aligned waypoints with random direction changes (an illustrative sketch follows this list)
- **Hardware**: 4× RTX 5000 Ada GPUs for simulation and data collection
- **Environments**: 8 urban maps (Town01, Town02, Town03, Town04, Town05, Town06, Town07, Town10HD)
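
For context, road-aligned waypoint following with random direction changes can be reproduced with CARLA's waypoint API. The sketch below is illustrative only, not the authors' collection script; the town, step size, and altitude are placeholders:

```python
# Illustrative sketch of road-aligned waypoint following at constant height in CARLA.
# Not the dataset's actual collection code; town, step size, and altitude are placeholders.
import random
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.load_world("Town05")
carla_map = world.get_map()

flight_height = 50.0  # constant UAV altitude in meters (placeholder)
step = 2.0            # spacing between consecutive waypoints in meters (placeholder)

# Start from a road-projected waypoint near an arbitrary spawn point.
start = carla_map.get_spawn_points()[0].location
waypoint = carla_map.get_waypoint(start)

route = []
for _ in range(500):
    loc = waypoint.transform.location
    route.append(carla.Location(x=loc.x, y=loc.y, z=flight_height))
    # waypoint.next() returns one candidate per reachable lane/road segment;
    # choosing one at random produces random direction changes at junctions.
    waypoint = random.choice(waypoint.next(step))

print(f"Generated a route with {len(route)} waypoints at z = {flight_height} m")
```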

## Visual Examples

### RGB Camera Views

### Semantic Segmentation Views

### Depth Map Views

## Research Applications

This dataset enables research in multiple areas:
- Visual-based UAV localization in GPS-denied environments
- Multi-view feature extraction and fusion
- Communication-efficient UAV-edge collaboration
- Task-oriented information bottleneck approaches
- Deep learning for aerial navigation

The dataset was specifically designed for the research presented in [Task-Oriented Communications for Visual Navigation with Edge-Aerial Collaboration in Low Altitude Economy](https://www.researchgate.net/publication/391159895_Task-Oriented_Communications_for_Visual_Navigation_with_Edge-Aerial_Collaboration_in_Low_Altitude_Economy).

## Usage Example

```python
# Basic example to load and visualize data
import os
import json
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Set paths
dataset_path = "path/to/dataset/town05_20241218_092919/town05_20241218_092919"
frame_id = "000000"

# Load metadata
with open(os.path.join(dataset_path, "metadata", f"{frame_id}.json"), "r") as f:
    metadata = json.load(f)

# Print UAV position and rotation
print(f"UAV Position: X={metadata['position']['x']}, Y={metadata['position']['y']}, Z={metadata['position']['z']}")
print(f"UAV Rotation: Roll={metadata['rotation']['roll']}, Pitch={metadata['rotation']['pitch']}, Yaw={metadata['rotation']['yaw']}")

# Load the RGB image (Front camera)
rgb_path = os.path.join(dataset_path, "rgb", "Front", f"{frame_id}.png")
rgb_image = Image.open(rgb_path)

# Load the semantic segmentation image (Front camera)
semantic_path = os.path.join(dataset_path, "semantic", "Front", f"{frame_id}.png")
semantic_image = Image.open(semantic_path)

# Load the raw depth data (Front camera)
depth_path = os.path.join(dataset_path, "depth", "Front", f"{frame_id}.npy")
depth_data = np.load(depth_path)

# Display the three views side by side
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
axes[0].imshow(rgb_image)
axes[0].set_title("RGB Image")
axes[1].imshow(semantic_image)
axes[1].set_title("Semantic Segmentation")
axes[2].imshow(depth_data, cmap='plasma')
axes[2].set_title("Depth Map")
plt.tight_layout()
plt.show()
```
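
The same pattern extends to all five views. As a follow-up (same placeholder path and frame ID as above), this sketch loads the RGB image from every camera for one frame and shows them side by side:

```python
# Follow-up sketch: load the RGB view from all five cameras for one frame.
import os
import matplotlib.pyplot as plt
from PIL import Image

dataset_path = "path/to/dataset/town05_20241218_092919/town05_20241218_092919"
frame_id = "000000"
cameras = ["Front", "Back", "Left", "Right", "Down"]

fig, axes = plt.subplots(1, len(cameras), figsize=(20, 4))
for ax, camera in zip(axes, cameras):
    image = Image.open(os.path.join(dataset_path, "rgb", camera, f"{frame_id}.png"))
    ax.imshow(image)
    ax.set_title(camera)
    ax.axis("off")
plt.tight_layout()
plt.show()
```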

## Citation

If you use this dataset in your research, please cite our paper:

```bibtex
@misc{fang2025taskorientedcommunicationsvisualnavigation,
  title={Task-Oriented Communications for Visual Navigation with Edge-Aerial Collaboration in Low Altitude Economy},
  author={Zhengru Fang and Zhenghao Liu and Jingjing Wang and Senkang Hu and Yu Guo and Yiqin Deng and Yuguang Fang},
  year={2025},
  eprint={2504.18317},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.18317},
}
```

## License

This dataset is released under the MIT License.

## Acknowledgments

This work was supported in part by the Hong Kong SAR Government under the Global STEM Professorship and Research Talent Hub, the Hong Kong Jockey Club under the Hong Kong JC STEM Lab of Smart City (Ref.: 2023-0108), the National Natural Science Foundation of China under Grant No. 62222101 and No. U24A20213, the Beijing Natural Science Foundation under Grant No. L232043 and No. L222039, the Natural Science Foundation of Zhejiang Province under Grant No. LMS25F010007, and the Hong Kong Innovation and Technology Commission under InnoHK Project CIMDA.

## Contact

For questions, issues, or collaboration opportunities, please contact:
- Email: zhefang4-c [AT] my [DOT] cityu [DOT] edu [DOT] hk
- GitHub: [TOC-Edge-Aerial](https://github.com/fangzr/TOC-Edge-Aerial)