---
license: mit
---
# Multi-View UAV Dataset

A comprehensive multi-view UAV dataset for visual navigation research in GPS-denied urban environments, collected using the CARLA simulator.

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Dataset Overview

This dataset supports research on visual navigation for unmanned aerial vehicles (UAVs) in GPS-denied urban environments. It features multi-directional camera views collected from simulated UAV flights across diverse urban landscapes, making it ideal for developing localization and navigation algorithms that rely on visual cues rather than GPS signals.

![RGB Visualization](https://raw.githubusercontent.com/fangzr/TOC-Edge-Aerial/refs/heads/main/figure/rgb_animation.gif)

## Key Features

- **Multi-View Perspective**: 5 cameras (Front, Back, Left, Right, Down) providing panoramic visual information
- **Multiple Data Types**: RGB images, semantic segmentation, and depth maps for comprehensive scene understanding
- **Precise Labels**: Accurate position coordinates and rotation angles for each frame
- **Diverse Environments**: 8 different urban maps with varying architectural styles and layouts
- **Large Scale**: 357,690 multi-view frames enabling robust algorithm training and evaluation

## Dataset Structure

```
Dataset_CARLA/town{XX}_YYYYMMDD_HHMMSS/town{XX}_YYYYMMDD_HHMMSS/
├── calibration/
│   └── camera_calibration.json  # Parameters for all 5 UAV onboard cameras
├── depth/                       # Depth images from all cameras
│   ├── Back/
│   │   ├── NNNNNN.npy           # Depth data in NumPy format
│   │   ├── NNNNNN.png           # Visualization of depth data
│   │   └── ...
│   ├── Down/
│   ├── Front/
│   ├── Left/
│   └── Right/
├── metadata/                    # UAV position, rotation angles and timestamps
│   ├── NNNNNN.json
│   ├── NNNNNN.json
│   └── ...
├── rgb/                         # RGB images from all cameras (PNG format)
│   ├── Back/
│   ├── Down/
│   ├── Front/
│   ├── Left/
│   └── Right/
└── semantic/                    # Semantic segmentation images (PNG format)
    ├── Back/
    ├── Down/
    ├── Front/
    ├── Left/
    └── Right/
```
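
Because every frame has exactly one JSON file under `metadata/`, that directory can serve as the frame index for the whole sequence. A minimal sketch for enumerating frames and building per-camera file paths (the sequence path below is illustrative):

```python
import os

# Illustrative sequence root; substitute a real town/timestamp directory.
seq = "Dataset_CARLA/town05_20241218_092919/town05_20241218_092919"
cameras = ["Front", "Back", "Left", "Right", "Down"]

def frame_ids(seq_root):
    """List frame IDs by scanning the metadata directory (one JSON per frame)."""
    meta_dir = os.path.join(seq_root, "metadata")
    return sorted(f[:-5] for f in os.listdir(meta_dir) if f.endswith(".json"))

def frame_paths(seq_root, frame_id, camera):
    """Build the RGB/semantic/depth/metadata paths for one frame and camera."""
    return {
        "rgb": os.path.join(seq_root, "rgb", camera, f"{frame_id}.png"),
        "semantic": os.path.join(seq_root, "semantic", camera, f"{frame_id}.png"),
        "depth": os.path.join(seq_root, "depth", camera, f"{frame_id}.npy"),
        "metadata": os.path.join(seq_root, "metadata", f"{frame_id}.json"),
    }
```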

## Data Format Details

### Image Data
- **RGB Images**: 400×300 pixel resolution in PNG format
- **Semantic Segmentation**: Class-labeled pixels in PNG format
- **Depth Maps**:
  - PNG format for visualization
  - NumPy (.npy) format for precise depth values

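The `.npy` depth files can be loaded directly with NumPy. A minimal sketch for normalizing raw depth values into a displayable image; it assumes float depth values and treats the clipping range as a free parameter, since the depth units are not documented here:

```python
import numpy as np

def depth_to_image(depth, max_depth=None):
    """Normalize a float depth map to [0, 1] for visualization.

    `max_depth` is an assumed clipping range (e.g. metres); the dataset's
    exact depth units are not specified in this README, so this is a sketch.
    """
    d = np.asarray(depth, dtype=np.float32)
    if max_depth is not None:
        d = np.clip(d, 0.0, max_depth)
    lo, hi = d.min(), d.max()
    # Guard against a constant depth map (division by zero).
    return (d - lo) / (hi - lo) if hi > lo else np.zeros_like(d)

# depth = np.load("depth/Front/000000.npy")   # one per-frame .npy file
# img = depth_to_image(depth, max_depth=100.0)
```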
### Metadata
Each frame includes a corresponding JSON file containing:
- Precise UAV position coordinates (x, y, z)
- Rotation angles (roll, pitch, yaw)
- Timestamp information

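A per-frame file might look like the following sketch. The `position` and `rotation` keys match the usage example later in this README; the timestamp key name and all values are illustrative assumptions:

```json
{
  "position": {"x": 102.5, "y": -48.3, "z": 50.0},
  "rotation": {"roll": 0.0, "pitch": 0.0, "yaw": 91.4},
  "timestamp": 12.345
}
```
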
### Camera Calibration
- Single JSON file with intrinsic and extrinsic parameters for all five cameras

## Collection Methodology

The dataset was collected using:
- **Simulator**: CARLA open urban driving simulator
- **Flight Pattern**: Constant-height UAV flight following road-aligned waypoints with random direction changes
- **Hardware**: 4× RTX 5000 Ada GPUs for simulation and data collection
- **Environments**: 8 urban maps (Town01, Town02, Town03, Town04, Town05, Town06, Town07, Town10HD)

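The road-following flight pattern can be sketched abstractly as a random walk over road successors. The toy adjacency map below stands in for CARLA's waypoint graph and is not the actual collection script; it only illustrates "follow the road, pick a branch at random":

```python
import random

def follow_roads(successors, start, n_steps, seed=0):
    """Sketch of the flight pattern: from each waypoint, continue to a
    successor, choosing at random when the road branches (e.g. at an
    intersection). `successors` is a hypothetical adjacency map; the real
    dataset uses CARLA's road-aligned waypoints, not this toy graph.
    """
    rng = random.Random(seed)
    path = [start]
    node = start
    for _ in range(n_steps):
        nxt = successors.get(node, [])
        if not nxt:          # dead end: stop the walk
            break
        node = rng.choice(nxt)
        path.append(node)
    return path
```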
## Visual Examples

### RGB Camera Views
![RGB Visualization](https://raw.githubusercontent.com/fangzr/TOC-Edge-Aerial/refs/heads/main/figure/rgb_animation.gif)

### Semantic Segmentation Views
![Semantic Visualization](https://raw.githubusercontent.com/fangzr/TOC-Edge-Aerial/refs/heads/main/figure/semantic_animation.gif)

### Depth Map Views
![Depth Visualization](https://raw.githubusercontent.com/fangzr/TOC-Edge-Aerial/refs/heads/main/figure/depth_animation.gif)

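If the semantic PNGs store raw class tags rather than colors (CARLA's raw semantic output encodes the tag in a single channel; whether this dataset ships raw or already-colorized PNGs is an assumption to verify), they can be colorized with a palette lookup. The palette entries below are placeholders, not CARLA's official palette:

```python
import numpy as np

# Placeholder palette (tag -> RGB); CARLA defines its own
# CityScapes-style palette, so treat these entries as illustrative.
PALETTE = {0: (0, 0, 0), 1: (128, 64, 128), 2: (70, 70, 70)}

def colorize_tags(tags):
    """Map a 2-D array of integer class tags to an RGB image."""
    out = np.zeros((*tags.shape, 3), dtype=np.uint8)
    for tag, rgb in PALETTE.items():
        out[tags == tag] = rgb  # broadcast the RGB triple over matching pixels
    return out
```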
## Research Applications

This dataset enables research in multiple areas:
- Vision-based UAV localization in GPS-denied environments
- Multi-view feature extraction and fusion
- Communication-efficient UAV-edge collaboration
- Task-oriented information-bottleneck approaches
- Deep learning for aerial navigation

The dataset was designed specifically for the research presented in [Task-Oriented Communications for Visual Navigation with Edge-Aerial Collaboration in Low Altitude Economy](https://www.researchgate.net/publication/391159895_Task-Oriented_Communications_for_Visual_Navigation_with_Edge-Aerial_Collaboration_in_Low_Altitude_Economy).

## Usage Example

```python
# Basic example: load one frame's metadata, RGB, semantic, and depth data
import os
import json
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Set paths
dataset_path = "path/to/dataset/town05_20241218_092919/town05_20241218_092919"
frame_id = "000000"

# Load metadata
with open(os.path.join(dataset_path, "metadata", f"{frame_id}.json"), "r") as f:
    metadata = json.load(f)

# Print UAV pose
print(f"UAV Position: X={metadata['position']['x']}, Y={metadata['position']['y']}, Z={metadata['position']['z']}")
print(f"UAV Rotation: Roll={metadata['rotation']['roll']}, Pitch={metadata['rotation']['pitch']}, Yaw={metadata['rotation']['yaw']}")

# Load RGB image (Front camera)
rgb_path = os.path.join(dataset_path, "rgb", "Front", f"{frame_id}.png")
rgb_image = Image.open(rgb_path)

# Load semantic segmentation image (Front camera)
semantic_path = os.path.join(dataset_path, "semantic", "Front", f"{frame_id}.png")
semantic_image = Image.open(semantic_path)

# Load depth data (Front camera)
depth_path = os.path.join(dataset_path, "depth", "Front", f"{frame_id}.npy")
depth_data = np.load(depth_path)

# Display the three views side by side
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
axes[0].imshow(rgb_image)
axes[0].set_title("RGB Image")
axes[1].imshow(semantic_image)
axes[1].set_title("Semantic Segmentation")
axes[2].imshow(depth_data, cmap='plasma')
axes[2].set_title("Depth Map")
plt.tight_layout()
plt.show()
```
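
Combining a depth map with camera intrinsics from `calibration/camera_calibration.json` yields a camera-frame point cloud via the standard pinhole back-projection. The calibration file's field names are not shown in this README, so this sketch takes the intrinsics as explicit parameters rather than guessing the JSON keys:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map to camera-frame 3-D points with a pinhole
    model. fx, fy are focal lengths in pixels and (cx, cy) is the principal
    point; in practice these would be read from camera_calibration.json.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (H, W, 3)
```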

## Citation

If you use this dataset in your research, please cite our paper:

```bibtex
@misc{fang2025taskorientedcommunicationsvisualnavigation,
  title={Task-Oriented Communications for Visual Navigation with Edge-Aerial Collaboration in Low Altitude Economy},
  author={Zhengru Fang and Zhenghao Liu and Jingjing Wang and Senkang Hu and Yu Guo and Yiqin Deng and Yuguang Fang},
  year={2025},
  eprint={2504.18317},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.18317},
}
```

## License

This dataset is released under the MIT License.

## Acknowledgments

This work was supported in part by the Hong Kong SAR Government under the Global STEM Professorship and Research Talent Hub, the Hong Kong Jockey Club under the Hong Kong JC STEM Lab of Smart City (Ref.: 2023-0108), the National Natural Science Foundation of China under Grant No. 62222101 and No. U24A20213, the Beijing Natural Science Foundation under Grant No. L232043 and No. L222039, the Natural Science Foundation of Zhejiang Province under Grant No. LMS25F010007, and the Hong Kong Innovation and Technology Commission under InnoHK Project CIMDA.

## Contact

For questions, issues, or collaboration opportunities, please contact:
- Email: zhefang4-c [AT] my [DOT] cityu [DOT] edu [DOT] hk
- GitHub: [TOC-Edge-Aerial](https://github.com/fangzr/TOC-Edge-Aerial)