dere-cbai committed on
Commit
5fc4576
·
1 Parent(s): 1a32889

Add collaborative perception interface

Files changed (8)
  1. DEPLOYMENT.md +209 -0
  2. README (1).md +364 -0
  3. README.md +16 -6
  4. app.py +293 -0
  5. collaborative-perception.html +1138 -0
  6. index.html +1138 -19
  7. requirements.txt +1 -0
  8. simple-app.py +200 -0
DEPLOYMENT.md ADDED
@@ -0,0 +1,209 @@
1
+ # 🚀 Deployment Guide: Hugging Face Spaces
2
+
3
+ This guide walks you through deploying the **Awesome Multi-Agent Collaborative Perception** interactive website to Hugging Face Spaces.
4
+
5
+ ## 📋 Prerequisites
6
+
7
+ - Hugging Face account ([sign up here](https://huggingface.co/join))
8
+ - Git installed on your system
9
+ - Basic familiarity with Git commands
10
+
11
+ ## 🎯 Quick Deployment
12
+
13
+ ### Option 1: Create Space Directly on Hugging Face
14
+
15
+ 1. **Go to Hugging Face Spaces**
16
+ - Visit [huggingface.co/spaces](https://huggingface.co/spaces)
17
+ - Click "Create new Space"
18
+
19
+ 2. **Configure Your Space**
20
+ - **Space name**: `awesome-multi-agent-collaborative-perception`
21
+ - **SDK**: Select "Gradio"
22
+ - **Visibility**: Public (recommended)
23
+ - **License**: MIT
24
+
25
+ 3. **Upload Files**
26
+ - Upload all files from this directory:
27
+ - `app.py`
28
+ - `collaborative-perception.html`
29
+ - `requirements.txt`
30
+ - `README.md`
31
+
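If you prefer to script the steps above instead of using the web UI, the `huggingface_hub` Python client can create and populate the Space. This is a minimal sketch (not part of the repository); it assumes a recent `huggingface_hub` release, that you are authenticated via `huggingface-cli login`, and that `YOUR_USERNAME` is replaced with your account name:

```python
# Sketch: programmatic alternative to the web UI upload.
# Assumes huggingface_hub is installed and you are logged in (`huggingface-cli login`).
from huggingface_hub import HfApi

repo_id = "YOUR_USERNAME/awesome-multi-agent-collaborative-perception"
api = HfApi()

# Create a public Gradio Space (no error if it already exists)
api.create_repo(repo_id=repo_id, repo_type="space", space_sdk="gradio", exist_ok=True)

# Upload app.py, collaborative-perception.html, requirements.txt and README.md
# from the current project directory
api.upload_folder(folder_path=".", repo_id=repo_id, repo_type="space")
```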
32
+ ### Option 2: Git Clone and Push
33
+
34
+ 1. **Create the Space on HF**
35
+ - Follow steps 1-2 from Option 1
36
+ - Copy the Git repository URL
37
+
38
+ 2. **Clone and Setup**
39
+ ```bash
40
+ git clone https://huggingface.co/spaces/YOUR_USERNAME/awesome-multi-agent-collaborative-perception
41
+ cd awesome-multi-agent-collaborative-perception
42
+ ```
43
+
44
+ 3. **Copy Files**
45
+ - Copy all files from this project into the cloned directory
46
+
47
+ 4. **Commit and Push**
48
+ ```bash
49
+ git add .
50
+ git commit -m "Initial deployment of collaborative perception website"
51
+ git push
52
+ ```
53
+
54
+ ## 🔧 File Structure
55
+
56
+ Your Hugging Face Space should have this structure:
57
+
58
+ ```
59
+ awesome-multi-agent-collaborative-perception/
60
+ ├── app.py                        # Gradio app entry point
61
+ ├── collaborative-perception.html # Main interactive website
62
+ ├── requirements.txt              # Python dependencies
63
+ ├── README.md                     # Space description with HF metadata
64
+ ├── .gitignore                    # Git ignore rules
65
+ └── DEPLOYMENT.md                 # This file
66
+ ```
67
+
68
+ ## ⚙️ Configuration Files
69
+
70
+ ### `app.py`
71
+ - Simple Gradio wrapper that serves the HTML file
72
+ - Configured for full-width display
73
+ - Minimal overhead for maximum performance
74
+
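For reference, a wrapper of this kind can be as small as the sketch below. This is an illustrative simplification under the assumptions above, not the actual `app.py` shipped in this repository:

```python
# Illustrative sketch of a minimal Gradio wrapper around the static HTML page.
# The real app.py in this repository adds more structure; this only shows the idea.
import gradio as gr

# Load the interactive website once at startup
with open("collaborative-perception.html", "r", encoding="utf-8") as f:
    html_content = f.read()

with gr.Blocks(title="Awesome Multi-Agent Collaborative Perception") as demo:
    gr.HTML(html_content)  # render the page inside the Space

if __name__ == "__main__":
    demo.launch()
```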
75
+ ### `requirements.txt`
76
+ - Only requires `gradio==4.44.0`
77
+ - Lightweight dependencies for fast startup
78
+
79
+ ### `README.md`
80
+ - Contains Hugging Face Space metadata in frontmatter
81
+ - Serves as the Space's landing page description
82
+
83
+ ## 🔍 Metadata Configuration
84
+
85
+ The README.md contains crucial metadata for Hugging Face:
86
+
87
+ ```yaml
88
+ ---
89
+ title: Awesome Multi-Agent Collaborative Perception
90
+ emoji: 🤖
91
+ colorFrom: blue
92
+ colorTo: purple
93
+ sdk: gradio
94
+ sdk_version: 4.44.0
95
+ app_file: app.py
96
+ pinned: false
97
+ license: mit
98
+ ---
99
+ ```
100
+
101
+ **Important**: Update `your-username` placeholders in the README with your actual Hugging Face username.
102
+
103
+ ## 🎨 Customization
104
+
105
+ ### Update Branding
106
+ 1. Change the `title` and `emoji` in README frontmatter
107
+ 2. Update social links in README badges
108
+ 3. Modify the color scheme (`colorFrom`, `colorTo`)
109
+
110
+ ### Add Analytics
111
+ Add tracking code to `collaborative-perception.html` before `</body>`:
112
+
113
+ ```html
114
+ <!-- Google Analytics or your preferred analytics -->
115
+ <script async src="https://www.googletagmanager.com/gtag/js?id=GA_MEASUREMENT_ID"></script>
116
+ <script>
117
+ window.dataLayer = window.dataLayer || [];
118
+ function gtag(){dataLayer.push(arguments);}
119
+ gtag('js', new Date());
120
+ gtag('config', 'GA_MEASUREMENT_ID');
121
+ </script>
122
+ ```
123
+
124
+ ## 🚀 Post-Deployment
125
+
126
+ ### 1. Test Your Space
127
+ - Visit your Space URL: `https://huggingface.co/spaces/YOUR_USERNAME/awesome-multi-agent-collaborative-perception`
128
+ - Test all interactive features
129
+ - Verify responsiveness on mobile devices
130
+
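A quick scripted check can complement manual testing. The snippet below is a suggestion rather than part of the project; it assumes the `requests` package is available locally and simply confirms that the Space URL responds once the build has finished:

```python
# Optional smoke test: confirm the Space responds before sharing the link.
import requests  # assumption: installed locally (pip install requests)

SPACE_URL = "https://huggingface.co/spaces/YOUR_USERNAME/awesome-multi-agent-collaborative-perception"

response = requests.get(SPACE_URL, timeout=30)
print(response.status_code)  # expect 200 once the Space has finished building
```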
131
+ ### 2. Share Your Space
132
+ - Add the Space URL to your GitHub repository
133
+ - Share on social media using the Space's share button
134
+ - Include in your research papers and presentations
135
+
136
+ ### 3. Monitor Usage
137
+ - Check Space analytics in your HF dashboard
138
+ - Monitor for user feedback and issues
139
+ - Update content regularly
140
+
141
+ ## 📊 Space Statistics
142
+
143
+ Your Space will track:
144
+ - **Views**: Total page visits
145
+ - **Likes**: User appreciation
146
+ - **Duplicates**: Forks of your Space
147
+ - **Comments**: User feedback
148
+
149
+ ## 🔄 Updates and Maintenance
150
+
151
+ ### Regular Content Updates
152
+ 1. **Monthly**: Update with latest papers and datasets
153
+ 2. **Quarterly**: Refresh conference information
154
+ 3. **Annually**: Major design improvements
155
+
156
+ ### Technical Updates
157
+ ```bash
158
+ # Pull latest changes
159
+ git pull
160
+
161
+ # Make your updates
162
+ # ... edit files ...
163
+
164
+ # Push changes
165
+ git add .
166
+ git commit -m "Update: Added latest CVPR 2025 papers"
167
+ git push
168
+ ```
169
+
170
+ ## 💡 Pro Tips
171
+
172
+ 1. **Performance**: Keep the HTML file under 2 MB for fast loading (a quick size check is sketched after this list)
173
+ 2. **SEO**: Update meta tags in the HTML for better search visibility
174
+ 3. **Accessibility**: Test with screen readers and keyboard navigation
175
+ 4. **Mobile**: Always test on mobile devices before deployment
176
+ 5. **Backup**: Keep a local backup of your custom content
177
+
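For the size guideline in tip 1, a standard-library one-off check looks like this (a sketch; adjust the path if your HTML file lives elsewhere):

```python
# Check that the interactive page stays under the ~2 MB guideline.
from pathlib import Path

size_mb = Path("collaborative-perception.html").stat().st_size / (1024 * 1024)
print(f"collaborative-perception.html is {size_mb:.2f} MB")
if size_mb > 2:
    print("Consider minifying the page or trimming embedded assets.")
```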
178
+ ## 🐛 Troubleshooting
179
+
180
+ ### Common Issues
181
+
182
+ **Space won't start:**
183
+ - Check `requirements.txt` syntax
184
+ - Verify `app.py` imports work
185
+ - Ensure HTML file exists and is valid
186
+
187
+ **HTML not displaying correctly:**
188
+ - Check file encoding (should be UTF-8)
189
+ - Verify JavaScript console for errors
190
+ - Test the HTML file locally first (see the sketch below)
191
+
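To test the HTML file locally, any static file server works. The sketch below uses only the Python standard library and serves the current directory, so you can open `http://localhost:8000/collaborative-perception.html` in a browser:

```python
# Minimal local preview server (equivalent to `python -m http.server 8000`).
import http.server
import socketserver

PORT = 8000
with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Serving at http://localhost:{PORT}")
    httpd.serve_forever()
```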
192
+ **Gradio errors:**
193
+ - Update to latest Gradio version
194
+ - Check Gradio documentation for breaking changes
195
+ - Simplify the Gradio interface if needed
196
+
197
+ ### Getting Help
198
+
199
+ - **Hugging Face Forum**: [discuss.huggingface.co](https://discuss.huggingface.co)
200
+ - **Discord**: [Hugging Face Discord](https://discord.gg/hugging-face)
201
+ - **Documentation**: [hf.co/docs/hub/spaces](https://huggingface.co/docs/hub/spaces)
202
+
203
+ ## 📄 License
204
+
205
+ This deployment is licensed under the MIT License. See the main README.md for details.
206
+
207
+ ---
208
+
209
+ **🎉 Ready to deploy? Your interactive collaborative perception website will be live in minutes!**
README (1).md ADDED
@@ -0,0 +1,364 @@
1
+ # Collaborative Perception
2
+
3
+ This repository is a paper digest of recent advances in **collaborative** / **cooperative** / **multi-agent** perception for **V2I** / **V2V** / **V2X** autonomous driving scenarios. Papers are listed in alphabetical order of the first character.
4
+
5
+ ### :link:Jump to:
6
+ - ### [[Method and Framework](https://github.com/Little-Podi/Collaborative_Perception#bookmarkmethod-and-framework)]
7
+ - ### [[Dataset and Simulator](https://github.com/Little-Podi/Collaborative_Perception#bookmarkdataset-and-simulator)]
8
+
9
+ Note: I find it hard to fairly compare all methods on each benchmark since some published results are obtained without specified training and testing settings, or even with modified model architectures. In fact, many works evaluate all baselines under their own settings and report those numbers, so inconsistencies between papers are likely. Hence, I dropped the collection and reproduction of all the benchmarks in a previous update. If you are interested, you can find plenty of results in [this archived version](https://github.com/Little-Podi/Collaborative_Perception/tree/1be25908aea0a9f635ff4852b3a90729cf2b6aac).
10
+
11
+
12
+
13
+ ## :star2:Recommendation
14
+
15
+ ### Helpful Learning Resource:thumbsup::thumbsup::thumbsup:
16
+
17
+ - **(Survey)** Collaborative Perception Datasets for Autonomous Driving: A Review [[paper](https://arxiv.org/abs/2504.12696)], Collaborative Perception for Connected and Autonomous Driving: Challenges, Possible Solutions and Opportunities [[paper](https://arxiv.org/abs/2401.01544)], V2X Cooperative Perception for Autonomous Driving: Recent Advances and Challenges [[paper](https://arxiv.org/abs/2310.03525)], Towards Vehicle-to-Everything Autonomous Driving: A Survey on Collaborative Perception [[paper](https://arxiv.org/abs/2308.16714)], Collaborative Perception in Autonomous Driving: Methods, Datasets and Challenges [[paper](https://arxiv.org/abs/2301.06262)], A Survey and Framework of Cooperative Perception: From Heterogeneous Singleton to Hierarchical Cooperation [[paper](https://arxiv.org/abs/2208.10590)]
18
+ - **(Talk)** Vehicle-to-Vehicle (V2V) Communication (Waabi CVPR 24 Tutorial on Self-Driving Cars) [[video](https://youtu.be/yceuUthWz9s)], Vehicle-to-Vehicle (V2V) Communication (Waabi CVPR 23 Tutorial on Self-Driving Cars) [[video](https://youtu.be/T-N51B8mZB8)], The Ultimate Solution for L4 Autonomous Driving [[video](https://youtu.be/cyNxemm4Ujg)], When Vision Transformers Meet Cooperative Perception [[video](https://youtu.be/rLAU4eqoOIU)], Scene Understanding beyond the Visible [[video](https://youtu.be/oz0AnmJZCR4)], Robust Collaborative Perception against Communication Interruption [[video](https://youtu.be/3cIWpMrsyeE)], Collaborative and Adversarial 3D Perception for Autonomous Driving [[video](https://youtu.be/W-AONQMfGi0)], Vehicle-to-Vehicle Communication for Self-Driving [[video](https://youtu.be/oikdOpmIoc4)], Adversarial Robustness for Self-Driving [[video](https://youtu.be/8uBFXzyII5Y)], The Ultimate Form of L4 Perception Systems: Cooperative Driving [[video](https://youtu.be/NvixMEDHht4)], CoBEVFlow: Solving Temporal Asynchrony in Vehicle-Road Cooperative Perception [[video](https://youtu.be/IBTgalAjye8)], Where2comm: Next-Generation Collaborative Perception that Cuts Communication Bandwidth by Tens of Thousands of Times [[video](https://youtu.be/i5coMk4hkuk)], From Task-Specific to Task-Agnostic Multi-Robot Collaborative Perception [[video](https://course.zhidx.com/c/MDlkZjcyZDgwZWI4ODBhOGQ4MzM=)], Cooperative Autonomous Driving: Simulation and Perception [[video](https://course.zhidx.com/c/MmQ1YWUyMzM1M2I3YzVlZjE1NzM=)], Beyond-Line-of-Sight Situational Awareness via Swarm Collaboration [[video](https://www.koushare.com/video/videodetail/33015)]
19
+ - **(Library)** V2Xverse: A Codebase for V2X-Based Collaborative End2End Autonomous Driving [[code](https://github.com/CollaborativePerception/V2Xverse)] [[doc](https://collaborativeperception.github.io/V2Xverse)], HEAL: An Extensible Framework for Open Heterogeneous Collaborative Perception [[code](https://github.com/yifanlu0227/HEAL)] [[doc](https://huggingface.co/yifanlu/HEAL)], OpenCOOD: Open Cooperative Detection Framework for Autonomous Driving [[code](https://github.com/DerrickXuNu/OpenCOOD)] [[doc](https://opencood.readthedocs.io/en/latest/index.html)], CoPerception: SDK for Collaborative Perception [[code](https://github.com/coperception/coperception)] [[doc](https://coperception.readthedocs.io/en/latest)], OpenCDA: Simulation Tool Integrated with Prototype Cooperative Driving Automation [[code](https://github.com/ucla-mobility/OpenCDA)] [[doc](https://opencda-documentation.readthedocs.io/en/latest)]
20
+ - **(People)** Runsheng Xu@UCLA [[web](https://derrickxunu.github.io)], Hao Xiang@UCLA [[web](https://xhwind.github.io)], Yiming Li@NYU [[web](https://yimingli-page.github.io)], Zixing Lei@SJTU [[web](https://chezacar.github.io)], Yifan Lu@SJTU [[web](https://yifanlu0227.github.io)], Siqi Fan@THU [[web](https://leofansq.github.io)], Hang Qiu@Waymo [[web](https://hangqiu.github.io)], Dian Chen@UT Austin [[web](https://www.cs.utexas.edu/~dchen)], Yen-Cheng Liu@GaTech [[web](https://ycliu93.github.io)], Tsun-Hsuan Wang@MIT [[web](https://zswang666.github.io)]
21
+ - **(Workshop)** Co-Intelligence@ECCV'24 [[web](https://coop-intelligence.github.io/)], CoPerception@ICRA'23 [[web](https://coperception.github.io)], ScalableAD@ICRA'23 [[web](https://sites.google.com/view/icra2023av/home)]
22
+ - **(Background)** Current Approaches and Future Directions for Point Cloud Object Detection in Intelligent Agents [[video](https://youtu.be/xFFCQVwYeec)], 3D Object Detection for Autonomous Driving: A Review and New Outlooks [[paper](https://arxiv.org/abs/2206.09474)], DACOM: Learning Delay-Aware Communication for Multi-Agent Reinforcement Learning [[video](https://youtu.be/YBgW2oA_n3k)], A Survey of Multi-Agent Reinforcement Learning with Communication [[paper](https://arxiv.org/abs/2203.08975)]
23
+
24
+ ### Typical Collaboration Modes:handshake::handshake::handshake:
25
+
26
+ ![](mode.png)
27
+
28
+ ### Possible Optimization Directions:fire::fire::fire:
29
+
30
+ ![](direction.png)
31
+
32
+
33
+
34
+ ## :bookmark:Method and Framework
35
+
36
+ Note: {Related} denotes that it is not a pure collaborative perception paper but has related content.
37
+
38
+ ### Selected Preprint
39
+
40
+ - **ACCO** (Is Discretization Fusion All You Need for Collaborative Perception?) [[paper](https://arxiv.org/abs/2503.13946)] [[code](https://github.com/sidiangongyuan/ACCO)]
41
+ - **AR2VP** (Dynamic V2X Autonomous Perception from Road-to-Vehicle Vision) [[paper](https://arxiv.org/abs/2310.19113)] [[code](https://github.com/tjy1423317192/AP2VP)]
42
+ - **CPPC** (Point Cluster: A Compact Message Unit for Communication-Efficient Collaborative Perception) [[paper&review](https://openreview.net/forum?id=54XlM8Clkg)] [~~code~~]
43
+ - **CMP** (CMP: Cooperative Motion Prediction with Multi-Agent Communication) [[paper](https://arxiv.org/abs/2403.17916)] [~~code~~]
44
+ - **CoBEVFusion** (CoBEVFusion: Cooperative Perception with LiDAR-Camera Bird's-Eye View Fusion) [[paper](https://arxiv.org/abs/2310.06008)] [~~code~~]
45
+ - **CoBEVGlue** (Self-Localized Collaborative Perception) [[paper](https://arxiv.org/abs/2406.12712)] [[code](https://github.com/VincentNi0107/CoBEVGlue)]
46
+ - **CoCMT** (CoCMT: Towards Communication-Efficient Cross-Modal Transformer for Collaborative Perception) [[paper&review](https://openreview.net/forum?id=S1NrbfMS7T)] [[code](https://github.com/taco-group/COCMT)]
47
+ - **CoDiff** (CoDiff: Conditional Diffusion Model for Collaborative 3D Object Detection) [[paper](https://arxiv.org/abs/2502.14891)] [~~code~~]
48
+ - **CoDriving** (Towards Collaborative Autonomous Driving: Simulation Platform and End-to-End System) [[paper](https://arxiv.org/abs/2404.09496)] [[code](https://github.com/CollaborativePerception/V2Xverse)]
49
+ - **CoDrivingLLM** (Towards Interactive and Learnable Cooperative Driving Automation: A Large Language Model-Driven Decision-making Framework) [[paper](https://arxiv.org/abs/2409.12812)] [[code](https://github.com/FanGShiYuu/CoDrivingLLM)]
50
+ - **CollaMamba** (CollaMamba: Efficient Collaborative Perception with Cross-Agent Spatial-Temporal State Space Model) [[paper](https://arxiv.org/abs/2409.07714)] [~~code~~]
51
+ - **CoLMDriver** (CoLMDriver: LLM-Based Negotiation Benefits Cooperative Autonomous Driving) [[paper](https://arxiv.org/abs/2503.08683)] [[code](https://github.com/cxliu0314/CoLMDriver)]
52
+ - **CoMamba** (CoMamba: Real-Time Cooperative Perception Unlocked with State Space Models) [[paper](https://arxiv.org/abs/2409.10699)] [~~code~~]
53
+ - **CooPre** (CooPre: Cooperative Pretraining for V2X Cooperative Perception)
54
+ - **CP-Guard+** (CP-Guard+: A New Paradigm for Malicious Agent Detection and Defense in Collaborative Perception) [[paper&review](https://openreview.net/forum?id=9MNzHTSDgh)] [~~code~~]
55
+ - **CTCE** (Leveraging Temporal Contexts to Enhance Vehicle-Infrastructure Cooperative Perception) [[paper](https://arxiv.org/abs/2408.10531)] [~~code~~]
56
+ - **Debrief** (Talking Vehicles: Cooperative Driving via Natural Language) [[paper&review](https://openreview.net/forum?id=VYlfoA8I6A)] [~~code~~]
57
+ - **DiffCP** (DiffCP: Ultra-Low Bit Collaborative Perception via Diffusion Model) [[paper](https://arxiv.org/abs/2409.19592)] [~~code~~]
58
+ - **InSPE** (InSPE: Rapid Evaluation of Heterogeneous Multi-Modal Infrastructure Sensor Placement) [[paper](https://arxiv.org/abs/2504.08240)] [~~code~~]
59
+ - **I2XTraj** (Knowledge-Informed Multi-Agent Trajectory Prediction at Signalized Intersections for Infrastructure-to-Everything) [[paper](https://arxiv.org/abs/2501.13461)] [~~code~~]
60
+ - **LangCoop** (LangCoop: Collaborative Driving with Language) [[paper](https://arxiv.org/abs/2504.13406)] [[code](https://github.com/taco-group/LangCoop)]
61
+ - **LCV2I** (LCV2I: Communication-Efficient and High-Performance Collaborative Perception Framework with Low-Resolution LiDAR) [[paper](https://arxiv.org/abs/2502.17039)] [~~code~~]
62
+ - **LMMCoDrive** (LMMCoDrive: Cooperative Driving with Large Multimodal Model) [[paper](https://arxiv.org/abs/2409.11981)] [[code](https://github.com/henryhcliu/LMMCoDrive)]
63
+ - **mmCooper** (mmCooper: A Multi-Agent Multi-Stage Communication-Efficient and Collaboration-Robust Cooperative Perception Framework) [[paper](https://arxiv.org/abs/2501.12263)] [~~code~~]
64
+ - **MOT-CUP** (Collaborative Multi-Object Tracking with Conformal Uncertainty Propagation) [[paper](https://arxiv.org/abs/2303.14346)] [[code](https://github.com/susanbao/mot_cup)]
65
+ - **RopeBEV** (RopeBEV: A Multi-Camera Roadside Perception Network in Bird's-Eye-View)
66
+ - **ParCon** (ParCon: Noise-Robust Collaborative Perception via Multi-Module Parallel Connection) [[paper](https://arxiv.org/abs/2407.11546)] [~~code~~]
67
+ - **PragComm** (Pragmatic Communication in Multi-Agent Collaborative Perception) [[paper](https://arxiv.org/abs/2401.12694)] [[code](https://github.com/PhyllisH/PragComm)]
68
+ - **QUEST** (QUEST: Query Stream for Vehicle-Infrastructure Cooperative Perception) [[paper](https://arxiv.org/abs/2308.01804)] [~~code~~]
69
+ - **RCDN** (RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via Dynamic Feature-Based 3D Neural Modeling) [[paper](https://arxiv.org/abs/2405.16868)] [~~code~~]
70
+ - **RG-Attn** (RG-Attn: Radian Glue Attention for Multi-Modality Multi-Agent Cooperative Perception) [[paper](https://arxiv.org/abs/2501.16803)] [~~code~~]
71
+ - **RoCo-Sim** (RoCo-Sim: Enhancing Roadside Collaborative Perception through Foreground Simulation) [[paper](https://arxiv.org/abs/2503.10410)] [[code](https://github.com/duyuwen-duen/RoCo-Sim)]
72
+ - **SiCP** (SiCP: Simultaneous Individual and Cooperative Perception for 3D Object Detection in Connected and Automated Vehicles) [[paper](https://arxiv.org/abs/2312.04822)] [[code](https://github.com/DarrenQu/SiCP)]
73
+ - **SparseAlign** (SparseAlign: A Fully Sparse Framework for Cooperative Object Detection) [[paper](https://arxiv.org/abs/2503.12982)] [~~code~~]
74
+ - **Talking Vehicles** (Towards Natural Language Communication for Cooperative Autonomous Driving via Self-Play) [[paper](https://arxiv.org/abs/2505.18334)] [[code](https://github.com/cuijiaxun/talking-vehicles)]
75
+ - **TOCOM-V2I** (Task-Oriented Communication for Vehicle-to-Infrastructure Cooperative Perception) [[paper](https://arxiv.org/abs/2407.20748)] [~~code~~]
76
+ - {Related} **TYP** (Transfer Your Perspective: Controllable 3D Generation from Any Viewpoint in a Driving Scene) [[paper](https://arxiv.org/abs/2502.06682)] [~~code~~]
77
+ - **VIMI** (VIMI: Vehicle-Infrastructure Multi-View Intermediate Fusion for Camera-Based 3D Object Detection) [[paper](https://arxiv.org/abs/2303.10975)] [[code](https://github.com/Bosszhe/VIMI)]
78
+ - **VLIF** (Is Intermediate Fusion All You Need for UAV-Based Collaborative Perception?) [[paper](https://arxiv.org/abs/2504.21774)] [[code](https://github.com/uestchjw/LIF)]
79
+ - **V2V-LLM** (V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multi-Modal Large Language Models) [[paper](https://arxiv.org/abs/2502.09980)] [[code](https://github.com/eddyhkchiu/V2VLLM)]
80
+ - **V2XPnP** (V2XPnP: Vehicle-to-Everything Spatio-Temporal Fusion for Multi-Agent Perception and Prediction) [[paper](https://arxiv.org/abs/2412.01812)] [[code](https://github.com/Zewei-Zhou/V2XPnP)]
81
+ - **V2X-DGPE** (V2X-DGPE: Addressing Domain Gaps and Pose Errors for Robust Collaborative 3D Object Detection) [[paper](https://arxiv.org/abs/2501.02363)] [[code](https://github.com/wangsch10/V2X-DGPE)]
82
+ - **V2X-DGW** (V2X-DGW: Domain Generalization for Multi-Agent Perception under Adverse Weather Conditions) [[paper](https://arxiv.org/abs/2403.11371)] [~~code~~]
83
+ - **V2X-M2C** (V2X-M2C: Efficient Multi-Module Collaborative Perception with Two Connections) [[paper](https://arxiv.org/abs/2407.11546)] [~~code~~]
84
+ - **V2X-PC** (V2X-PC: Vehicle-to-Everything Collaborative Perception via Point Cluster) [[paper](https://arxiv.org/abs/2403.16635)] [~~code~~]
85
+ - **V2X-ReaLO** (V2X-ReaLO: An Open Online Framework and Dataset for Cooperative Perception in Reality) [[paper](https://arxiv.org/abs/2503.10034)] [~~code~~]
86
+
87
+ ### CVPR 2025
88
+
89
+ - **CoGMP** (Generative Map Priors for Collaborative BEV Semantic Segmentation) [~~paper~~] [~~code~~]
90
+ - **CoSDH** (CoSDH: Communication-Efficient Collaborative Perception via Supply-Demand Awareness and Intermediate-Late Hybridization) [[paper](https://arxiv.org/abs/2503.03430)] [[code](https://github.com/Xu2729/CoSDH)]
91
+ - **PolyInter** (One is Plenty: A Polymorphic Feature Interpreter for Immutable Heterogeneous Collaborative Perception) [[paper](https://arxiv.org/abs/2411.16799)] [~~code~~]
92
+ - **SparseAlign** (SparseAlign: A Fully Sparse Framework for Cooperative Object Detection) [~~paper~~] [~~code~~]
93
+ - **V2X-R** (V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection with Denoising Diffusion) [[paper](https://arxiv.org/abs/2411.08402)] [[code](https://github.com/ylwhxht/V2X-R)]
94
+
95
+ ### ICLR 2025
96
+
97
+ - **CPPC** (Point Cluster: A Compact Message Unit for Communication-Efficient Collaborative Perception) [[paper&review](https://openreview.net/forum?id=54XlM8Clkg)] [~~code~~]
98
+ - **R&B-POP** (Learning 3D Perception from Others' Predictions) [[paper&review](https://openreview.net/forum?id=Ylk98vWQuQ)] [[code](https://github.com/jinsuyoo/rnb-pop)]
99
+ - **STAMP** (STAMP: Scalable Task- And Model-Agnostic Collaborative Perception) [[paper&review](https://openreview.net/forum?id=8NdNniulYE)] [[code](https://github.com/taco-group/STAMP)]
100
+
101
+ ### AAAI 2025
102
+
103
+ - **CP-Guard** (CP-Guard: Malicious Agent Detection and Defense in Collaborative Bird's Eye View Perception) [[paper](https://arxiv.org/abs/2412.12000)] [~~code~~]
104
+ - **DSRC** (DSRC: Learning Density-Insensitive and Semantic-Aware Collaborative Representation against Corruptions) [[paper](https://arxiv.org/abs/2412.10739)] [[code](https://github.com/Terry9a/DSRC)]
105
+ - **UniV2X** (End-to-End Autonomous Driving through V2X Cooperation) [[paper](https://arxiv.org/abs/2404.00717)] [[code](https://github.com/AIR-THU/UniV2X)]
106
+
107
+ ### ICRA 2025
108
+
109
+ - **CoDynTrust** (CoDynTrust: Robust Asynchronous Collaborative Perception via Dynamic Feature Trust Modulus) [[paper](https://arxiv.org/abs/2502.08169)] [[code](https://github.com/CrazyShout/CoDynTrust)]
110
+ - **CoopDETR** (CoopDETR: A Unified Cooperative Perception Framework for 3D Detection via Object Query) [[paper](https://arxiv.org/abs/2502.19313)] [~~code~~]
111
+ - **Co-MTP** (Co-MTP: A Cooperative Trajectory Prediction Framework with Multi-Temporal Fusion for Autonomous Driving) [[paper](https://arxiv.org/abs/2502.16589)] [[code](https://github.com/xiaomiaozhang/Co-MTP)]
112
+ - **Direct-CP** (Direct-CP: Directed Collaborative Perception for Connected and Autonomous Vehicles via Proactive Attention) [[paper](https://arxiv.org/abs/2409.08840)] [~~code~~]
113
+ - **V2X-DG** (V2X-DG: Domain Generalization for Vehicle-to-Everything Cooperative Perception) [[paper](https://arxiv.org/abs/2503.15435)] [~~code~~]
114
+
115
+ ### CVPR 2024
116
+
117
+ - **CoHFF** (Collaborative Semantic Occupancy Prediction with Hybrid Feature Fusion in Connected Automated Vehicles) [[paper](https://arxiv.org/abs/2402.07635)] [~~code~~]
118
+ - **CoopDet3D** (TUMTraf V2X Cooperative Perception Dataset) [[paper](https://arxiv.org/abs/2403.01316)] [[code](https://github.com/tum-traffic-dataset/coopdet3d)]
119
+ - **CodeFilling** (Communication-Efficient Collaborative Perception via Information Filling with Codebook) [[paper](https://arxiv.org/abs/2405.04966)] [[code](https://github.com/PhyllisH/CodeFilling)]
120
+ - **ERMVP** (ERMVP: Communication-Efficient and Collaboration-Robust Multi-Vehicle Perception in Challenging Environments) [[paper](https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_ERMVP_Communication-Efficient_and_Collaboration-Robust_Multi-Vehicle_Perception_in_Challenging_Environments_CVPR_2024_paper.html)] [[code](https://github.com/Terry9a/ERMVP)]
121
+ - **MRCNet** (Multi-Agent Collaborative Perception via Motion-Aware Robust Communication Network) [[paper](https://openaccess.thecvf.com/content/CVPR2024/html/Hong_Multi-agent_Collaborative_Perception_via_Motion-aware_Robust_Communication_Network_CVPR_2024_paper.html)] [[code](https://github.com/IndigoChildren/collaborative-perception-MRCNet)]
122
+
123
+ ### ECCV 2024
124
+
125
+ - **Hetecooper** (Hetecooper: Feature Collaboration Graph for Heterogeneous Collaborative Perception) [[paper](https://eccv.ecva.net/virtual/2024/poster/2467)] [~~code~~]
126
+ - **Infra-Centric CP** (Rethinking the Role of Infrastructure in Collaborative Perception) [[paper](https://arxiv.org/abs/2410.11259)] [~~code~~]
127
+
128
+ ### NeurIPS 2024
129
+
130
+ - **V2X-Graph** (Learning Cooperative Trajectory Representations for Motion Forecasting) [[paper](https://arxiv.org/abs/2311.00371)] [[code](https://github.com/AIR-THU/V2X-Graph)]
131
+
132
+ ### ICLR 2024
133
+
134
+ - **HEAL** (An Extensible Framework for Open Heterogeneous Collaborative Perception) [[paper&review](https://openreview.net/forum?id=KkrDUGIASk)] [[code](https://github.com/yifanlu0227/HEAL)]
135
+
136
+ ### AAAI 2024
137
+
138
+ - **CMiMC** (What Makes Good Collaborative Views? Contrastive Mutual Information Maximization for Multi-Agent Perception) [[paper](https://arxiv.org/abs/2403.10068)] [[code](https://github.com/77SWF/CMiMC)]
139
+ - **DI-V2X** (DI-V2X: Learning Domain-Invariant Representation for Vehicle-Infrastructure Collaborative 3D Object Detection) [[paper](https://arxiv.org/abs/2312.15742)] [[code](https://github.com/Serenos/DI-V2X)]
140
+ - **V2XFormer** (DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving) [[paper](https://arxiv.org/abs/2304.01168)] [[code](https://github.com/tianqi-wang1996/DeepAccident)]
141
+
142
+ ### WACV 2024
143
+
144
+ - **MACP** (MACP: Efficient Model Adaptation for Cooperative Perception) [[paper](https://arxiv.org/abs/2310.16870)] [[code](https://github.com/PurdueDigitalTwin/MACP)]
145
+
146
+ ### ICRA 2024
147
+
148
+ - **DMSTrack** (Probabilistic 3D Multi-Object Cooperative Tracking for Autonomous Driving via Differentiable Multi-Sensor Kalman Filter) [[paper](https://arxiv.org/abs/2309.14655)] [[code](https://github.com/eddyhkchiu/DMSTrack)]
149
+ - **FreeAlign** (Robust Collaborative Perception without External Localization and Clock Devices) [[paper](https://arxiv.org/abs/2405.02965)] [[code](https://github.com/MediaBrain-SJTU/FreeAlign)]
150
+
151
+ ### CVPR 2023
152
+
153
+ - {Related} **BEVHeight** (BEVHeight: A Robust Framework for Vision-Based Roadside 3D Object Detection) [[paper](https://arxiv.org/abs/2303.08498)] [[code](https://github.com/ADLab-AutoDrive/BEVHeight)]
154
+ - **CoCa3D** (Collaboration Helps Camera Overtake LiDAR in 3D Detection) [[paper](https://arxiv.org/abs/2303.13560)] [[code](https://github.com/MediaBrain-SJTU/CoCa3D)]
155
+ - **FF-Tracking** (V2X-Seq: The Large-Scale Sequential Dataset for the Vehicle-Infrastructure Cooperative Perception and Forecasting) [[paper](https://arxiv.org/abs/2305.05938)] [[code](https://github.com/AIR-THU/DAIR-V2X-Seq)]
156
+
157
+ ### NeurIPS 2023
158
+
159
+ - **CoBEVFlow** (Robust Asynchronous Collaborative 3D Detection via Bird's Eye View Flow) [[paper&review](https://openreview.net/forum?id=UHIDdtxmVS)] [[code](https://github.com/MediaBrain-SJTU/CoBEVFlow)]
160
+ - **FFNet** (Flow-Based Feature Fusion for Vehicle-Infrastructure Cooperative 3D Object Detection) [[paper&review](https://openreview.net/forum?id=gsglrhvQxX)] [[code](https://github.com/haibao-yu/FFNet-VIC3D)]
161
+ - **How2comm** (How2comm: Communication-Efficient and Collaboration-Pragmatic Multi-Agent Perception) [[paper&review](https://openreview.net/forum?id=Dbaxm9ujq6)] [[code](https://github.com/ydk122024/How2comm)]
162
+
163
+ ### ICCV 2023
164
+
165
+ - **CORE** (CORE: Cooperative Reconstruction for Multi-Agent Perception) [[paper](https://arxiv.org/abs/2307.11514)] [[code](https://github.com/zllxot/CORE)]
166
+ - **HM-ViT** (HM-ViT: Hetero-Modal Vehicle-to-Vehicle Cooperative Perception with Vision Transformer) [[paper](https://arxiv.org/abs/2304.10628)] [[code](https://github.com/XHwind/HM-ViT)]
167
+ - **ROBOSAC** (Among Us: Adversarially Robust Collaborative Perception by Consensus) [[paper](https://arxiv.org/abs/2303.09495)] [[code](https://github.com/coperception/ROBOSAC)]
168
+ - **SCOPE** (Spatio-Temporal Domain Awareness for Multi-Agent Collaborative Perception) [[paper](https://arxiv.org/abs/2307.13929)] [[code](https://github.com/starfdu1418/SCOPE)]
169
+ - **TransIFF** (TransIFF: An Instance-Level Feature Fusion Framework for Vehicle-Infrastructure Cooperative 3D Detection with Transformers) [[paper](https://openaccess.thecvf.com/content/ICCV2023/html/Chen_TransIFF_An_Instance-Level_Feature_Fusion_Framework_for_Vehicle-Infrastructure_Cooperative_3D_ICCV_2023_paper.html)] [~~code~~]
170
+ - **UMC** (UMC: A Unified Bandwidth-Efficient and Multi-Resolution Based Collaborative Perception Framework) [[paper](https://arxiv.org/abs/2303.12400)] [[code](https://github.com/ispc-lab/UMC)]
171
+
172
+ ### ICLR 2023
173
+
174
+ - {Related} **CO3** (CO3: Cooperative Unsupervised 3D Representation Learning for Autonomous Driving) [[paper&review](https://openreview.net/forum?id=QUaDoIdgo0)] [[code](https://github.com/Runjian-Chen/CO3)]
175
+
176
+ ### CoRL 2023
177
+
178
+ - **BM2CP** (BM2CP: Efficient Collaborative Perception with LiDAR-Camera Modalities) [[paper&review](https://openreview.net/forum?id=uJqxFjF1xWp)] [[code](https://github.com/byzhaoAI/BM2CP)]
179
+
180
+ ### MM 2023
181
+
182
+ - **DUSA** (DUSA: Decoupled Unsupervised Sim2Real Adaptation for Vehicle-to-Everything Collaborative Perception) [[paper](https://arxiv.org/abs/2310.08117)] [[code](https://github.com/refkxh/DUSA)]
183
+ - **FeaCo** (FeaCo: Reaching Robust Feature-Level Consensus in Noisy Pose Conditions) [[paper](https://dl.acm.org/doi/abs/10.1145/3581783.3611880)] [[code](https://github.com/jmgu0212/FeaCo)]
184
+ - **What2comm** (What2comm: Towards Communication-Efficient Collaborative Perception via Feature Decoupling) [[paper](https://dl.acm.org/doi/abs/10.1145/3581783.3611699)] [~~code~~]
185
+
186
+ ### WACV 2023
187
+
188
+ - **AdaFusion** (Adaptive Feature Fusion for Cooperative Perception Using LiDAR Point Clouds) [[paper](https://arxiv.org/abs/2208.00116)] [[code](https://github.com/DonghaoQiao/Adaptive-Feature-Fusion-for-Cooperative-Perception)]
189
+
190
+ ### ICRA 2023
191
+
192
+ - **CoAlign** (Robust Collaborative 3D Object Detection in Presence of Pose Errors) [[paper](https://arxiv.org/abs/2211.07214)] [[code](https://github.com/yifanlu0227/CoAlign)]
193
+ - {Related} **DMGM** (Deep Masked Graph Matching for Correspondence Identification in Collaborative Perception) [[paper](https://arxiv.org/abs/2303.07555)] [[code](https://github.com/gaopeng5/DMGM)]
194
+ - **Double-M Quantification** (Uncertainty Quantification of Collaborative Detection for Self-Driving) [[paper](https://arxiv.org/abs/2209.08162)] [[code](https://github.com/coperception/double-m-quantification)]
195
+ - **MAMP** (Model-Agnostic Multi-Agent Perception Framework) [[paper](https://arxiv.org/abs/2203.13168)] [[code](https://github.com/DerrickXuNu/model_anostic)]
196
+ - **MATE** (Communication-Critical Planning via Multi-Agent Trajectory Exchange) [[paper](https://arxiv.org/abs/2303.06080)] [~~code~~]
197
+ - **MPDA** (Bridging the Domain Gap for Multi-Agent Perception) [[paper](https://arxiv.org/abs/2210.08451)] [[code](https://github.com/DerrickXuNu/MPDA)]
198
+ - **WNT** (We Need to Talk: Identifying and Overcoming Communication-Critical Scenarios for Self-Driving) [[paper](https://arxiv.org/abs/2305.04352)] [~~code~~]
199
+
200
+ ### CVPR 2022
201
+
202
+ - **Coopernaut** (COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles) [[paper](https://arxiv.org/abs/2205.02222)] [[code](https://github.com/UT-Austin-RPL/Coopernaut)]
203
+ - {Related} **LAV** (Learning from All Vehicles) [[paper](https://arxiv.org/abs/2203.11934)] [[code](https://github.com/dotchen/LAV)]
204
+ - **TCLF** (DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative 3D Object Detection) [[paper](https://arxiv.org/abs/2204.05575)] [[code](https://github.com/AIR-THU/DAIR-V2X)]
205
+
206
+ ### NeurIPS 2022
207
+
208
+ - **Where2comm** (Where2comm: Efficient Collaborative Perception via Spatial Confidence Maps) [[paper&review](https://openreview.net/forum?id=dLL4KXzKUpS)] [[code](https://github.com/MediaBrain-SJTU/where2comm)]
209
+
210
+ ### ECCV 2022
211
+
212
+ - **SyncNet** (Latency-Aware Collaborative Perception) [[paper](https://arxiv.org/abs/2207.08560)] [[code](https://github.com/MediaBrain-SJTU/SyncNet)]
213
+ - **V2X-ViT** (V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer) [[paper](https://arxiv.org/abs/2203.10638)] [[code](https://github.com/DerrickXuNu/v2x-vit)]
214
+
215
+ ### CoRL 2022
216
+
217
+ - **CoBEVT** (CoBEVT: Cooperative Bird's Eye View Semantic Segmentation with Sparse Transformers) [[paper&review](https://openreview.net/forum?id=PAFEQQtDf8s)] [[code](https://github.com/DerrickXuNu/CoBEVT)]
218
+ - **STAR** (Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception) [[paper&review](https://openreview.net/forum?id=hW0tcXOJas2)] [[code](https://github.com/coperception/star)]
219
+
220
+ ### IJCAI 2022
221
+
222
+ - **IA-RCP** (Robust Collaborative Perception against Communication Interruption) [[paper](https://learn-to-race.org/workshop-ai4ad-ijcai2022/papers.html)] [~~code~~]
223
+
224
+ ### MM 2022
225
+
226
+ - **CRCNet** (Complementarity-Enhanced and Redundancy-Minimized Collaboration Network for Multi-agent Perception) [[paper](https://dl.acm.org/doi/abs/10.1145/3503161.3548197)] [~~code~~]
227
+
228
+ ### ICRA 2022
229
+
230
+ - **AttFuse** (OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication) [[paper](https://arxiv.org/abs/2109.07644)] [[code](https://github.com/DerrickXuNu/OpenCOOD)]
231
+ - **MP-Pose** (Multi-Robot Collaborative Perception with Graph Neural Networks) [[paper](https://arxiv.org/abs/2201.01760)] [~~code~~]
232
+
233
+ ### NeurIPS 2021
234
+
235
+ - **DiscoNet** (Learning Distilled Collaboration Graph for Multi-Agent Perception) [[paper&review](https://openreview.net/forum?id=ZRcjSOmYraB)] [[code](https://github.com/ai4ce/DiscoNet)]
236
+
237
+ ### ICCV 2021
238
+
239
+ - **Adversarial V2V** (Adversarial Attacks On Multi-Agent Communication) [[paper](https://arxiv.org/abs/2101.06560)] [~~code~~]
240
+
241
+ ### IROS 2021
242
+
243
+ - **MASH** (Overcoming Obstructions via Bandwidth-Limited Multi-Agent Spatial Handshaking) [[paper](https://arxiv.org/abs/2107.00771)] [[code](https://github.com/yifanlu0227/CoAlign)]
244
+
245
+ ### CVPR 2020
246
+
247
+ - **When2com** (When2com: Multi-Agent Perception via Communication Graph Grouping) [[paper](https://arxiv.org/abs/2006.00176)] [[code](https://github.com/GT-RIPL/MultiAgentPerception)]
248
+
249
+ ### ECCV 2020
250
+
251
+ - **DSDNet** (DSDNet: Deep Structured Self-Driving Network) [[paper](https://arxiv.org/abs/2008.06041)] [~~code~~]
252
+ - **V2VNet** (V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction) [[paper](https://arxiv.org/abs/2008.07519)] [[code](https://github.com/DerrickXuNu/OpenCOOD)]
253
+
254
+ ### CoRL 2020
255
+
256
+ - **Robust V2V** (Learning to Communicate and Correct Pose Errors) [[paper](https://arxiv.org/abs/2011.05289)] [[code](https://github.com/yifanlu0227/CoAlign)]
257
+
258
+ ### ICRA 2020
259
+
260
+ - **Who2com** (Who2com: Collaborative Perception via Learnable Handshake Communication) [[paper](https://arxiv.org/abs/2003.09575)] [[code](https://github.com/GT-RIPL/MultiAgentPerception)]
261
+ - **MAIN** (Enhancing Multi-Robot Perception via Learned Data Association) [[paper](https://arxiv.org/abs/2107.00769)] [~~code~~]
262
+
263
+
264
+
265
+ ## :bookmark:Dataset and Simulator
266
+
267
+ Note: {Real} denotes that the sensor data is obtained by real-world collection instead of simulation.
268
+
269
+ ### Selected Preprint
270
+
271
+ - **Adver-City** (Adver-City: Open-Source Multi-Modal Dataset for Collaborative Perception Under Adverse Weather Conditions) [[paper](https://arxiv.org/abs/2410.06380)] [[code](https://github.com/QUARRG/Adver-City)] [[project](https://labs.cs.queensu.ca/quarrg/datasets/adver-city)]
272
+ - **CP-GuardBench** (CP-Guard+: A New Paradigm for Malicious Agent Detection and Defense in Collaborative Perception) [[paper&review](https://openreview.net/forum?id=9MNzHTSDgh)] [~~code~~] [~~project~~]
273
+ - **Griffin** (Griffin: Aerial-Ground Cooperative Detection and Tracking Dataset and Benchmark) [[paper](https://arxiv.org/abs/2503.06983)] [[code](https://github.com/wang-jh18-SVM/Griffin)] [[project](https://pan.baidu.com/s/1NDgsuHB-QPRiROV73NRU5g)]
274
+ - {Real} **InScope** (InScope: A New Real-world 3D Infrastructure-side Collaborative Perception Dataset for Open Traffic Scenarios) [[paper](https://arxiv.org/abs/2407.21581)] [[code](https://github.com/xf-zh/InScope)] [~~project~~]
275
+ - {Real} **Mixed Signals** (Mixed Signals: A Diverse Point Cloud Dataset for Heterogeneous LiDAR V2X Collaboration) [[paper](https://arxiv.org/abs/2502.14156)] [[code](https://github.com/chinitaberrio/Mixed-Signals)] [[project](https://mixedsignalsdataset.cs.cornell.edu)]
276
+ - **Multi-V2X** (Multi-V2X: A Large Scale Multi-modal Multi-penetration-rate Dataset for Cooperative Perception) [[paper](https://arxiv.org/abs/2409.04980)] [[code](https://github.com/RadetzkyLi/Multi-V2X)] [~~project~~]
277
+ - **M3CAD** (M3CAD: Towards Generic Cooperative Autonomous Driving Benchmark) [[paper](https://arxiv.org/abs/2505.06746)] [[code](https://github.com/zhumorui/M3CAD)] [~~project~~]
278
+ - **OPV2V-N** (RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via Dynamic Feature-based 3D Neural Modeling) [[paper](https://arxiv.org/abs/2405.16868)] [~~code~~] [~~project~~]
279
+ - **TalkingVehiclesGym** (Towards Natural Language Communication for Cooperative Autonomous Driving via Self-Play) [[paper](https://arxiv.org/abs/2505.18334)] [[code](https://github.com/cuijiaxun/talking-vehicles)] [[project](https://talking-vehicles.github.io)]
280
+ - **V2V-QA** (V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multi-Modal Large Language Models) [[paper](https://arxiv.org/abs/2502.09980)] [[code](https://github.com/eddyhkchiu/V2VLLM)] [[project](https://eddyhkchiu.github.io/v2vllm.github.io)]
281
+ - {Real} **V2XPnP-Seq** (V2XPnP: Vehicle-to-Everything Spatio-Temporal Fusion for Multi-Agent Perception and Prediction) [[paper](https://arxiv.org/abs/2412.01812)] [[code](https://github.com/Zewei-Zhou/V2XPnP)] [[project](https://mobility-lab.seas.ucla.edu/v2xpnp)]
282
+ - {Real} **V2X-Radar** (V2X-Radar: A Multi-Modal Dataset with 4D Radar for Cooperative Perception) [[paper](https://arxiv.org/abs/2411.10962)] [[code](https://github.com/yanglei18/V2X-Radar)] [[project](http://openmpd.com/column/V2X-Radar)]
283
+ - {Real} **V2X-Real** (V2X-Real: a Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception) [[paper](https://arxiv.org/abs/2403.16034)] [~~code~~] [[project](https://mobility-lab.seas.ucla.edu/v2x-real)]
284
+ - {Real} **V2X-ReaLO** (V2X-ReaLO: An Open Online Framework and Dataset for Cooperative Perception in Reality) [[paper](https://arxiv.org/abs/2503.10034)] [~~code~~] [~~project~~]
285
+ - **WHALES** (WHALES: A Multi-Agent Scheduling Dataset for Enhanced Cooperation in Autonomous Driving) [[paper](https://arxiv.org/abs/2411.13340)] [[code](https://github.com/chensiweiTHU/WHALES)] [[project](https://pan.baidu.com/s/1dintX-d1T-m2uACqDlAM9A)]
286
+
287
+ ### CVPR 2025
288
+
289
+ - **Mono3DVLT-V2X** (Mono3DVLT: Monocular-Video-Based 3D Visual Language Tracking) [~~paper~~] [~~code~~] [~~project~~]
290
+ - **RCP-Bench** (RCP-Bench: Benchmarking Robustness for Collaborative Perception Under Diverse Corruptions) [~~paper~~] [~~code~~] [~~project~~]
291
+ - **V2X-R** (V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection with Denoising Diffusion) [[paper](https://arxiv.org/abs/2411.08402)] [[code](https://github.com/ylwhxht/V2X-R)] [~~project~~]
292
+
293
+ ### CVPR 2024
294
+
295
+ - {Real} **HoloVIC** (HoloVIC: Large-Scale Dataset and Benchmark for Multi-Sensor Holographic Intersection and Vehicle-Infrastructure Cooperative) [[paper](https://arxiv.org/abs/2403.02640)] [~~code~~] [[project](https://holovic.net)]
296
+ - {Real} **Open Mars Dataset** (Multiagent Multitraversal Multimodal Self-Driving: Open MARS Dataset) [[code](https://github.com/ai4ce/MARS)] [[paper](https://arxiv.org/abs/2406.09383)] [[project](https://ai4ce.github.io/MARS)]
297
+ - {Real} **RCooper** (RCooper: A Real-World Large-Scale Dataset for Roadside Cooperative Perception) [[paper](https://arxiv.org/abs/2403.10145)] [[code](https://github.com/AIR-THU/DAIR-RCooper)] [[project](https://www.t3caic.com/qingzhen)]
298
+ - {Real} **TUMTraf-V2X** (TUMTraf V2X Cooperative Perception Dataset) [[paper](https://arxiv.org/abs/2403.01316)] [[code](https://github.com/tum-traffic-dataset/tum-traffic-dataset-dev-kit)] [[project](https://tum-traffic-dataset.github.io/tumtraf-v2x)]
299
+
300
+ ### ECCV 2024
301
+
302
+ - {Real} **H-V2X** (H-V2X: A Large Scale Highway Dataset for BEV Perception) [[paper](https://eccv2024.ecva.net/virtual/2024/poster/126)] [~~code~~] [~~project~~]
303
+
304
+ ### NeurIPS 2024
305
+
306
+ - {Real} **DAIR-V2X-Traj** (Learning Cooperative Trajectory Representations for Motion Forecasting) [[paper](https://arxiv.org/abs/2311.00371)] [[code](https://github.com/AIR-THU/V2X-Graph)] [[project](https://thudair.baai.ac.cn/index)]
307
+
308
+ ### ICLR 2024
309
+
310
+ - **OPV2V-H** (An Extensible Framework for Open Heterogeneous Collaborative Perception) [[paper&review](https://openreview.net/forum?id=KkrDUGIASk)] [[code](https://github.com/yifanlu0227/HEAL)] [[project](https://huggingface.co/datasets/yifanlu/OPV2V-H)]
311
+
312
+ ### AAAI 2024
313
+
314
+ - **DeepAccident** (DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving) [[paper](https://arxiv.org/abs/2304.01168)] [[code](https://github.com/tianqi-wang1996/DeepAccident)] [[project](https://deepaccident.github.io)]
315
+
316
+ ### CVPR 2023
317
+
318
+ - **CoPerception-UAV+** (Collaboration Helps Camera Overtake LiDAR in 3D Detection) [[paper](https://arxiv.org/abs/2303.13560)] [[code](https://github.com/MediaBrain-SJTU/CoCa3D)] [[project](https://siheng-chen.github.io/dataset/CoPerception+)]
319
+ - **OPV2V+** (Collaboration Helps Camera Overtake LiDAR in 3D Detection) [[paper](https://arxiv.org/abs/2303.13560)] [[code](https://github.com/MediaBrain-SJTU/CoCa3D)] [[project](https://siheng-chen.github.io/dataset/CoPerception+)]
320
+ - {Real} **V2V4Real** (V2V4Real: A Large-Scale Real-World Dataset for Vehicle-to-Vehicle Cooperative Perception) [[paper](https://arxiv.org/abs/2303.07601)] [[code](https://github.com/ucla-mobility/V2V4Real)] [[project](https://mobility-lab.seas.ucla.edu/v2v4real)]
321
+ - {Real} **DAIR-V2X-Seq** (V2X-Seq: The Large-Scale Sequential Dataset for the Vehicle-Infrastructure Cooperative Perception and Forecasting) [[paper](https://arxiv.org/abs/2305.05938)] [[code](https://github.com/AIR-THU/DAIR-V2X-Seq)] [[project](https://thudair.baai.ac.cn/index)]
322
+
323
+ ### NeurIPS 2023
324
+
325
+ - **IRV2V** (Robust Asynchronous Collaborative 3D Detection via Bird's Eye View Flow) [[paper&review](https://openreview.net/forum?id=UHIDdtxmVS)] [~~code~~] [~~project~~]
326
+
327
+ ### ICCV 2023
328
+
329
+ - **Roadside-Opt** (Optimizing the Placement of Roadside LiDARs for Autonomous Driving) [[paper](https://arxiv.org/abs/2310.07247)] [~~code~~] [~~project~~]
330
+
331
+ ### ICRA 2023
332
+
333
+ - {Real} **DAIR-V2X-C Complemented** (Robust Collaborative 3D Object Detection in Presence of Pose Errors) [[paper](https://arxiv.org/abs/2211.07214)] [[code](https://github.com/yifanlu0227/CoAlign)] [[project](https://siheng-chen.github.io/dataset/dair-v2x-c-complemented)]
334
+ - **RLS** (Analyzing Infrastructure LiDAR Placement with Realistic LiDAR Simulation Library) [[paper](https://arxiv.org/abs/2211.15975)] [[code](https://github.com/PJLab-ADG/LiDARSimLib-and-Placement-Evaluation)] [~~project~~]
335
+ - **V2XP-ASG** (V2XP-ASG: Generating Adversarial Scenes for Vehicle-to-Everything Perception) [[paper](https://arxiv.org/abs/2209.13679)] [[code](https://github.com/XHwind/V2XP-ASG)] [~~project~~]
336
+
337
+ ### CVPR 2022
338
+
339
+ - **AutoCastSim** (COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles) [[paper](https://arxiv.org/abs/2205.02222)] [[code](https://github.com/hangqiu/AutoCastSim)] [[project](https://utexas.app.box.com/v/coopernaut-dataset)]
340
+ - {Real} **DAIR-V2X** (DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative 3D Object Detection) [[paper](https://arxiv.org/abs/2204.05575)] [[code](https://github.com/AIR-THU/DAIR-V2X)] [[project](https://thudair.baai.ac.cn/index)]
341
+
342
+ ### NeurIPS 2022
343
+
344
+ - **CoPerception-UAV** (Where2comm: Efficient Collaborative Perception via Spatial Confidence Maps) [[paper&review](https://openreview.net/forum?id=dLL4KXzKUpS)] [[code](https://github.com/MediaBrain-SJTU/where2comm)] [[project](https://siheng-chen.github.io/dataset/coperception-uav)]
345
+
346
+ ### ECCV 2022
347
+
348
+ - **V2XSet** (V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer) [[paper](https://arxiv.org/abs/2203.10638)] [[code](https://github.com/DerrickXuNu/v2x-vit)] [[project](https://drive.google.com/drive/folders/1r5sPiBEvo8Xby-nMaWUTnJIPK6WhY1B6)]
349
+
350
+ ### ICRA 2022
351
+
352
+ - **OPV2V** (OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication) [[paper](https://arxiv.org/abs/2109.07644)] [[code](https://github.com/DerrickXuNu/OpenCOOD)] [[project](https://mobility-lab.seas.ucla.edu/opv2v)]
353
+
354
+ ### ACCV 2022
355
+
356
+ - **DOLPHINS** (DOLPHINS: Dataset for Collaborative Perception Enabled Harmonious and Interconnected Self-Driving) [[paper](https://arxiv.org/abs/2207.07609)] [[code](https://github.com/explosion5/Dolphins)] [[project](https://dolphins-dataset.net)]
357
+
358
+ ### ICCV 2021
359
+
360
+ - **V2X-Sim** (V2X-Sim: Multi-Agent Collaborative Perception Dataset and Benchmark for Autonomous Driving) [[paper](https://arxiv.org/abs/2202.08449)] [[code](https://github.com/ai4ce/V2X-Sim)] [[project](https://ai4ce.github.io/V2X-Sim)]
361
+
362
+ ### CoRL 2017
363
+
364
+ - **CARLA** (CARLA: An Open Urban Driving Simulator) [[paper](https://arxiv.org/abs/1711.03938)] [[code](https://github.com/carla-simulator/carla)] [[project](https://carla.org)]
README.md CHANGED
@@ -1,11 +1,21 @@
1
  ---
2
- title: Awesome Multi Agent Collaborative Perception
3
- emoji: 📊
4
- colorFrom: gray
5
- colorTo: gray
6
  sdk: static
7
  pinned: false
8
- license: mit
9
  ---
10
 
11
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
1
  ---
2
+ title: Awesome Multi-Agent Collaborative Perception
3
+ emoji: 🤖
4
+ colorFrom: blue
5
+ colorTo: purple
6
  sdk: static
7
  pinned: false
 
8
  ---
9
 
10
+ # Awesome Multi-Agent Collaborative Perception
11
+
12
+ Interactive web interface for exploring cutting-edge resources in Multi-Agent Collaborative Perception, Prediction, and Planning.
13
+
14
+ ## Features
15
+ - 📊 200+ Research Papers
16
+ - 🗃️ 25+ Datasets
17
+ - 💻 50+ Code Repositories
18
+ - 🔍 Interactive Search & Filtering
19
+ - 📱 Responsive Design
20
+
21
+ Visit the live demo: [Your Space URL]
app.py ADDED
@@ -0,0 +1,293 @@
1
+ import gradio as gr
2
+
3
+ # Sample data for demonstration
4
+ perception_papers = [
5
+ {
6
+ "title": "CoSDH: Communication-Efficient Collaborative Perception",
7
+ "venue": "CVPR 2025",
8
+ "description": "Novel approach for efficient collaborative perception using supply-demand awareness.",
9
+ "link": "https://arxiv.org/abs/2503.03430"
10
+ },
11
+ {
12
+ "title": "V2X-R: Cooperative LiDAR-4D Radar Fusion",
13
+ "venue": "CVPR 2025",
14
+ "description": "Cooperative fusion of LiDAR and 4D radar sensors for enhanced 3D object detection.",
15
+ "link": "https://arxiv.org/abs/2411.08402"
16
+ },
17
+ {
18
+ "title": "Where2comm: Efficient Collaborative Perception via Spatial Confidence Maps",
19
+ "venue": "NeurIPS 2022",
20
+ "description": "Groundbreaking work on efficient collaborative perception using spatial confidence maps.",
21
+ "link": "https://openreview.net/forum?id=dLL4KXzKUpS"
22
+ },
23
+ {
24
+ "title": "STAMP: Scalable Task-Agnostic Collaborative Perception",
25
+ "venue": "ICLR 2025",
26
+ "description": "Framework for scalable collaborative perception that is both task and model agnostic.",
27
+ "link": "https://openreview.net/forum?id=8NdNniulYE"
28
+ },
29
+ {
30
+ "title": "CoBEVFlow: Robust Asynchronous Collaborative 3D Detection",
31
+ "venue": "NeurIPS 2023",
32
+ "description": "Handles temporal asynchrony in collaborative perception using bird's eye view flow.",
33
+ "link": "https://openreview.net/forum?id=UHIDdtxmVS"
34
+ }
35
+ ]
36
+
37
+ datasets_data = [
38
+ ["DAIR-V2X", "2022", "Real-world", "V2I", "71K frames", "3D boxes, Infrastructure"],
39
+ ["V2V4Real", "2023", "Real-world", "V2V", "20K frames", "Real V2V, Highway"],
40
+ ["TUMTraf-V2X", "2024", "Real-world", "V2X", "2K sequences", "Dense labels, Urban"],
41
+ ["OPV2V", "2022", "Simulation", "V2V", "Large-scale", "CARLA, Multi-agent"],
42
+ ["V2X-Sim", "2021", "Simulation", "Multi", "Scalable", "Multi-agent, Collaborative"],
43
+ ["DOLPHINS", "2024", "Simulation", "UAV", "UAV swarms", "AirSim, Multi-UAV"]
44
+ ]
45
+
46
+ def create_paper_card(paper):
47
+ return f"""
48
+ <div style="border: 1px solid #ddd; border-radius: 10px; padding: 20px; margin: 10px 0; background: white; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
49
+ <div style="background: #667eea; color: white; padding: 5px 10px; border-radius: 15px; display: inline-block; font-size: 0.8em; margin-bottom: 10px;">
50
+ {paper['venue']}
51
+ </div>
52
+ <h3 style="color: #333; margin: 10px 0;">{paper['title']}</h3>
53
+ <p style="color: #666; line-height: 1.5; margin-bottom: 15px;">{paper['description']}</p>
54
+ <a href="{paper['link']}" target="_blank" style="background: #667eea; color: white; padding: 8px 15px; border-radius: 5px; text-decoration: none; font-size: 0.9em;">
55
+ 📄 Read Paper
56
+ </a>
57
+ </div>
58
+ """
59
+
60
+ # Custom CSS
61
+ custom_css = """
62
+ .gradio-container {
63
+ max-width: 1200px !important;
64
+ }
65
+ .main-header {
66
+ text-align: center;
67
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
68
+ color: white;
69
+ padding: 40px 20px;
70
+ border-radius: 15px;
71
+ margin-bottom: 30px;
72
+ }
73
+ """
74
+
75
+ # Create the interface
76
+ with gr.Blocks(
77
+ title="🤖 Awesome Multi-Agent Collaborative Perception",
78
+ theme=gr.themes.Soft(),
79
+ css=custom_css
80
+ ) as demo:
81
+
82
+ # Header
83
+ gr.HTML("""
84
+ <div class="main-header">
85
+ <h1 style="font-size: 2.5rem; margin-bottom: 10px;">🤖 Awesome Multi-Agent Collaborative Perception</h1>
86
+ <p style="font-size: 1.2rem; opacity: 0.9;">Explore cutting-edge resources for Multi-Agent Collaborative Perception, Prediction, and Planning</p>
87
+ <div style="display: flex; justify-content: center; gap: 30px; margin-top: 20px; flex-wrap: wrap;">
88
+ <div style="background: rgba(255,255,255,0.2); padding: 10px 20px; border-radius: 25px;">
89
+ <div style="font-size: 1.5rem; font-weight: bold;">200+</div>
90
+ <div>Papers</div>
91
+ </div>
92
+ <div style="background: rgba(255,255,255,0.2); padding: 10px 20px; border-radius: 25px;">
93
+ <div style="font-size: 1.5rem; font-weight: bold;">25+</div>
94
+ <div>Datasets</div>
95
+ </div>
96
+ <div style="background: rgba(255,255,255,0.2); padding: 10px 20px; border-radius: 25px;">
97
+ <div style="font-size: 1.5rem; font-weight: bold;">50+</div>
98
+ <div>Code Repos</div>
99
+ </div>
100
+ <div style="background: rgba(255,255,255,0.2); padding: 10px 20px; border-radius: 25px;">
101
+ <div style="font-size: 1.5rem; font-weight: bold;">2025</div>
102
+ <div>Updated</div>
103
+ </div>
104
+ </div>
105
+ </div>
106
+ """)
107
+
108
+ # Main navigation tabs
109
+ with gr.Tabs():
110
+
111
+ with gr.Tab("🔍 Perception"):
112
+ gr.Markdown("## Multi-Agent Collaborative Perception Papers")
113
+ gr.Markdown("*Latest research in collaborative sensing, 3D object detection, and V2X communication*")
114
+
115
+ # Create paper cards
116
+ papers_html = "".join([create_paper_card(paper) for paper in perception_papers])
117
+ gr.HTML(papers_html)
118
+
119
+ gr.Markdown("""
120
+ ### 🔄 Key Communication Strategies:
121
+ - **Early Fusion**: Raw sensor data sharing
122
+ - **Late Fusion**: Detection-level information exchange
123
+ - **Intermediate Fusion**: Feature-level collaboration
124
+ - **Selective Communication**: Confidence-based data sharing
125
+ """)
126
+
127
+ with gr.Tab("📊 Datasets"):
128
+ gr.Markdown("## Datasets & Benchmarks")
129
+ gr.Markdown("*Comprehensive collection of real-world and simulation datasets*")
130
+
131
+ gr.Dataframe(
132
+ value=datasets_data,
133
+ headers=["Dataset", "Year", "Type", "Agents", "Size", "Features"],
134
+ datatype=["str", "str", "str", "str", "str", "str"],
135
+ interactive=False
136
+ )
137
+
138
+ gr.Markdown("""
139
+ ### 🌟 Notable Features:
140
+ - **DAIR-V2X**: First real-world V2I collaborative perception dataset with infrastructure sensors
141
+ - **V2V4Real**: Real vehicle-to-vehicle communication dataset collected on highways
142
+ - **TUMTraf-V2X**: Dense annotations for urban collaborative perception scenarios
143
+ - **OPV2V**: Large-scale simulation benchmark built on CARLA platform
144
+ - **V2X-Sim**: Comprehensive multi-agent simulation with customizable scenarios
145
+ """)
146
+
147
+ with gr.Tab("📍 Tracking"):
148
+ gr.Markdown("## Multi-Object Tracking & State Estimation")
149
+ gr.Markdown("*Collaborative tracking across distributed agents with uncertainty quantification*")
150
+
151
+ gr.HTML("""
152
+ <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(300px, 1fr)); gap: 20px; margin: 20px 0;">
153
+ <div style="border: 1px solid #ddd; border-radius: 10px; padding: 20px; background: white; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
154
+ <h3 style="color: #4ECDC4;">MOT-CUP</h3>
155
+ <p>Multi-Object Tracking with Conformal Uncertainty Propagation</p>
156
+ <a href="https://arxiv.org/abs/2303.14346" target="_blank" style="color: #667eea;">📄 Paper</a>
157
+ </div>
158
+ <div style="border: 1px solid #ddd; border-radius: 10px; padding: 20px; background: white; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
159
+ <h3 style="color: #4ECDC4;">DMSTrack</h3>
160
+ <p>Probabilistic 3D Multi-Object Cooperative Tracking (ICRA 2024)</p>
161
+ <a href="https://arxiv.org/abs/2309.14655" target="_blank" style="color: #667eea;">📄 Paper</a>
162
+ </div>
163
+ <div style="border: 1px solid #ddd; border-radius: 10px; padding: 20px; background: white; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
164
+ <h3 style="color: #4ECDC4;">CoDynTrust</h3>
165
+ <p>Dynamic Feature Trust for Robust Asynchronous Collaborative Perception (ICRA 2025)</p>
166
+ <a href="https://arxiv.org/abs/2502.08169" target="_blank" style="color: #667eea;">📄 Paper</a>
167
+ </div>
168
+ </div>
169
+ """)
170
+
171
+ gr.Markdown("""
172
+ ### 🎯 Key Challenges:
173
+ - **Temporal Asynchrony**: Handling different sensor timestamps and communication delays
174
+ - **Uncertainty Quantification**: Reliable confidence estimation across multiple agents
175
+ - **Data Association**: Multi-agent correspondence and track management
176
+ - **Scalability**: Maintaining performance with increasing number of agents
177
+ """)
178
+
179
+ with gr.Tab("🔮 Prediction"):
180
+ gr.Markdown("## Trajectory Forecasting & Motion Prediction")
181
+ gr.Markdown("*Cooperative prediction for autonomous systems and multi-agent coordination*")
182
+
183
+ gr.HTML("""
184
+ <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(300px, 1fr)); gap: 20px; margin: 20px 0;">
185
+ <div style="border: 1px solid #ddd; border-radius: 10px; padding: 20px; background: white; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
186
+ <h3 style="color: #45B7D1;">V2X-Graph</h3>
187
+ <p>Learning Cooperative Trajectory Representations (NeurIPS 2024)</p>
188
+ <a href="https://arxiv.org/abs/2311.00371" target="_blank" style="color: #667eea;">📄 Paper</a>
189
+ </div>
190
+ <div style="border: 1px solid #ddd; border-radius: 10px; padding: 20px; background: white; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
191
+ <h3 style="color: #45B7D1;">Co-MTP</h3>
192
+ <p>Cooperative Multi-Temporal Prediction Framework (ICRA 2025)</p>
193
+ <a href="https://arxiv.org/abs/2502.16589" target="_blank" style="color: #667eea;">📄 Paper</a>
194
+ </div>
195
+ </div>
196
+ """)
197
+
198
+ gr.HTML("""
199
+ <div style="background: #f8f9fa; border-radius: 10px; padding: 20px; margin: 20px 0;">
200
+ <h3>🧠 Key Approaches:</h3>
201
+ <ul style="line-height: 1.8;">
202
+ <li><strong>Graph Neural Networks</strong>: Modeling agent interactions and social behaviors</li>
203
+ <li><strong>Transformer Architectures</strong>: Attention-based prediction with long-range dependencies</li>
204
+ <li><strong>Multi-Modal Fusion</strong>: Combining LiDAR, camera, and communication data</li>
205
+ <li><strong>Uncertainty Quantification</strong>: Reliable confidence estimation for safety-critical applications</li>
206
+ </ul>
207
+ </div>
208
+ """)
209
+
210
+ with gr.Tab("⚙️ Methods"):
211
+ gr.Markdown("## Methods & Techniques")
212
+ gr.Markdown("*Core methodologies for communication, robustness, and learning in collaborative systems*")
213
+
214
+ with gr.Row():
215
+ with gr.Column():
216
+ gr.Markdown("""
217
+ ### 📡 Communication Strategies
218
+ - **Bandwidth Optimization**: Compression and selective sharing
219
+ - **Protocol Design**: V2V, V2I, V2X communication standards
220
+ - **Network Topology**: Centralized vs. distributed architectures
221
+ - **Quality of Service**: Latency and reliability management
222
+ """)
223
+
224
+ with gr.Column():
225
+ gr.Markdown("""
226
+ ### 🛡️ Robustness Approaches
227
+ - **Byzantine Fault Tolerance**: Handling adversarial agents
228
+ - **Uncertainty Handling**: Robust fusion under noise
229
+ - **Privacy Preservation**: Secure multi-party computation
230
+ - **Malicious Agent Detection**: CP-Guard framework (AAAI 2025)
231
+ """)
232
+
233
+ gr.HTML("""
234
+ <div style="background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); color: white; border-radius: 10px; padding: 20px; margin: 20px 0;">
235
+ <h3>🧠 Learning Paradigms</h3>
236
+ <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(250px, 1fr)); gap: 15px; margin-top: 15px;">
237
+ <div>• <strong>Federated Learning</strong>: Distributed model training</div>
238
+ <div>• <strong>Transfer Learning</strong>: Cross-domain adaptation</div>
239
+ <div>• <strong>Meta-Learning</strong>: Quick adaptation to new scenarios</div>
240
+ <div>• <strong>Heterogeneous Learning</strong>: HEAL framework (ICLR 2024)</div>
241
+ </div>
242
+ </div>
243
+ """)
244
+
245
+ with gr.Tab("🏛️ Conferences"):
246
+ gr.Markdown("## Top Venues & Publication Trends")
247
+ gr.Markdown("*Premier conferences and emerging research directions in collaborative perception*")
248
+
249
+ conference_data = [
250
+ ["CVPR 2025", "5+", "End-to-end systems, robustness"],
251
+ ["ICLR 2025", "3+", "Learning representations, scalability"],
252
+ ["AAAI 2025", "4+", "AI applications, defense mechanisms"],
253
+ ["ICRA 2025", "6+", "Robotics applications, real-world deployment"],
254
+ ["NeurIPS 2024", "2+", "Theoretical foundations, novel architectures"]
255
+ ]
256
+
257
+ gr.Dataframe(
258
+ value=conference_data,
259
+ headers=["Conference", "Papers", "Focus Areas"],
260
+ datatype=["str", "str", "str"],
261
+ interactive=False
262
+ )
263
+
264
+ gr.Markdown("""
265
+ ### 📊 Research Trends (2024-2025):
266
+ - **Communication Efficiency**: 40% increase in bandwidth-aware methods
267
+ - **Robustness & Security**: Emerging focus on adversarial robustness (15+ papers)
268
+ - **Real-World Deployment**: Growing emphasis on practical systems and field tests
269
+ - **Heterogeneous Systems**: Multi-modal and multi-agent diversity becoming standard
270
+ - **End-to-End Learning**: Integration of perception, prediction, and planning
271
+ """)
272
+
273
+ # Footer
274
+ gr.HTML("""
275
+ <div style="text-align: center; margin-top: 40px; padding: 30px; background: #f8f9fa; border-radius: 10px;">
276
+ <h3>🤝 Contributing</h3>
277
+ <p>We welcome contributions! Please submit papers, datasets, and code repositories via GitHub.</p>
278
+ <div style="margin-top: 20px;">
279
+ <a href="https://github.com/your-username/awesome-multi-agent-collaborative-perception" target="_blank"
280
+ style="background: #667eea; color: white; padding: 10px 20px; border-radius: 5px; text-decoration: none; margin: 5px;">
281
+ 📚 GitHub Repository
282
+ </a>
283
+ <a href="https://huggingface.co/spaces/your-username/awesome-multi-agent-collaborative-perception" target="_blank"
284
+ style="background: #ff6b6b; color: white; padding: 10px 20px; border-radius: 5px; text-decoration: none; margin: 5px;">
285
+ 🤗 Hugging Face Space
286
+ </a>
287
+ </div>
288
+ <p style="margin-top: 20px; color: #666;">Made with ❤️ for the Collaborative Perception Community</p>
289
+ </div>
290
+ """)
291
+
292
+ if __name__ == "__main__":
293
+ demo.launch()
collaborative-perception.html ADDED
@@ -0,0 +1,1138 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta charset="UTF-8">
5
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
6
+ <title>🤖 Awesome Multi-Agent Collaborative Perception</title>
7
+ <link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css" rel="stylesheet">
8
+ <style>
9
+ * {
10
+ margin: 0;
11
+ padding: 0;
12
+ box-sizing: border-box;
13
+ }
14
+
15
+ body {
16
+ font-family: 'Inter', -apple-system, BlinkMacSystemFont, sans-serif;
17
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
18
+ min-height: 100vh;
19
+ color: #333;
20
+ }
21
+
22
+ .container {
23
+ max-width: 1400px;
24
+ margin: 0 auto;
25
+ padding: 20px;
26
+ }
27
+
28
+ .header {
29
+ text-align: center;
30
+ margin-bottom: 40px;
31
+ color: white;
32
+ }
33
+
34
+ .header h1 {
35
+ font-size: 3rem;
36
+ font-weight: 700;
37
+ margin-bottom: 10px;
38
+ text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
39
+ }
40
+
41
+ .header p {
42
+ font-size: 1.2rem;
43
+ opacity: 0.9;
44
+ margin-bottom: 20px;
45
+ }
46
+
47
+ .stats-bar {
48
+ display: flex;
49
+ justify-content: center;
50
+ gap: 30px;
51
+ margin: 20px 0;
52
+ flex-wrap: wrap;
53
+ }
54
+
55
+ .stat-badge {
56
+ background: rgba(255,255,255,0.2);
57
+ padding: 10px 20px;
58
+ border-radius: 25px;
59
+ color: white;
60
+ font-weight: bold;
61
+ text-align: center;
62
+ }
63
+
64
+ .stat-number {
65
+ font-size: 1.5rem;
66
+ display: block;
67
+ }
68
+
69
+ .main-sections {
70
+ display: grid;
71
+ grid-template-columns: repeat(auto-fit, minmax(350px, 1fr));
72
+ gap: 30px;
73
+ margin-bottom: 40px;
74
+ }
75
+
76
+ .section-card {
77
+ background: rgba(255,255,255,0.95);
78
+ border-radius: 20px;
79
+ padding: 40px;
80
+ cursor: pointer;
81
+ transition: all 0.3s ease;
82
+ text-align: center;
83
+ box-shadow: 0 10px 30px rgba(0,0,0,0.1);
84
+ position: relative;
85
+ overflow: hidden;
86
+ }
87
+
88
+ .section-card::before {
89
+ content: '';
90
+ position: absolute;
91
+ top: 0;
92
+ left: -100%;
93
+ width: 100%;
94
+ height: 100%;
95
+ background: linear-gradient(90deg, transparent, rgba(255,255,255,0.4), transparent);
96
+ transition: left 0.5s ease;
97
+ }
98
+
99
+ .section-card:hover::before {
100
+ left: 100%;
101
+ }
102
+
103
+ .section-card:hover {
104
+ transform: translateY(-10px);
105
+ box-shadow: 0 20px 40px rgba(0,0,0,0.15);
106
+ }
107
+
108
+ .section-icon {
109
+ font-size: 4rem;
110
+ margin-bottom: 20px;
111
+ color: #667eea;
112
+ }
113
+
114
+ .perception-card .section-icon { color: #FF6B6B; }
115
+ .tracking-card .section-icon { color: #4ECDC4; }
116
+ .prediction-card .section-icon { color: #45B7D1; }
117
+ .datasets-card .section-icon { color: #96CEB4; }
118
+ .methods-card .section-icon { color: #FECA57; }
119
+ .conferences-card .section-icon { color: #A55EEA; }
120
+
121
+ .section-card h2 {
122
+ font-size: 1.8rem;
123
+ margin-bottom: 15px;
124
+ color: #333;
125
+ }
126
+
127
+ .section-card p {
128
+ color: #666;
129
+ font-size: 1rem;
130
+ line-height: 1.6;
131
+ margin-bottom: 20px;
132
+ }
133
+
134
+ .stats {
135
+ display: flex;
136
+ justify-content: space-around;
137
+ margin-top: 20px;
138
+ }
139
+
140
+ .stat {
141
+ text-align: center;
142
+ }
143
+
144
+ .stat-number {
145
+ font-size: 1.3rem;
146
+ font-weight: bold;
147
+ color: #667eea;
148
+ }
149
+
150
+ .stat-label {
151
+ font-size: 0.8rem;
152
+ color: #666;
153
+ }
154
+
155
+ .content-panel {
156
+ display: none;
157
+ background: rgba(255,255,255,0.95);
158
+ border-radius: 20px;
159
+ padding: 30px;
160
+ margin-top: 20px;
161
+ box-shadow: 0 10px 30px rgba(0,0,0,0.1);
162
+ }
163
+
164
+ .content-panel.active {
165
+ display: block;
166
+ animation: slideIn 0.3s ease;
167
+ }
168
+
169
+ @keyframes slideIn {
170
+ from { opacity: 0; transform: translateY(20px); }
171
+ to { opacity: 1; transform: translateY(0); }
172
+ }
173
+
174
+ .panel-header {
175
+ display: flex;
176
+ justify-content: space-between;
177
+ align-items: center;
178
+ margin-bottom: 30px;
179
+ padding-bottom: 15px;
180
+ border-bottom: 2px solid #eee;
181
+ }
182
+
183
+ .panel-title {
184
+ font-size: 2rem;
185
+ color: #333;
186
+ }
187
+
188
+ .close-btn {
189
+ background: #ff4757;
190
+ color: white;
191
+ border: none;
192
+ border-radius: 50%;
193
+ width: 40px;
194
+ height: 40px;
195
+ cursor: pointer;
196
+ font-size: 1.2rem;
197
+ transition: all 0.3s ease;
198
+ }
199
+
200
+ .close-btn:hover {
201
+ background: #ff3838;
202
+ transform: scale(1.1);
203
+ }
204
+
205
+ .papers-grid {
206
+ display: grid;
207
+ grid-template-columns: repeat(auto-fit, minmax(400px, 1fr));
208
+ gap: 20px;
209
+ margin-bottom: 30px;
210
+ }
211
+
212
+ .paper-item {
213
+ background: #f8f9fa;
214
+ border-radius: 15px;
215
+ padding: 20px;
216
+ border-left: 4px solid #667eea;
217
+ transition: all 0.3s ease;
218
+ }
219
+
220
+ .paper-item:hover {
221
+ transform: translateX(5px);
222
+ background: #e3f2fd;
223
+ box-shadow: 0 5px 15px rgba(0,0,0,0.1);
224
+ }
225
+
226
+ .paper-venue {
227
+ background: #667eea;
228
+ color: white;
229
+ padding: 4px 12px;
230
+ border-radius: 15px;
231
+ font-size: 0.8rem;
232
+ font-weight: bold;
233
+ display: inline-block;
234
+ margin-bottom: 10px;
235
+ }
236
+
237
+ .paper-title {
238
+ font-size: 1.1rem;
239
+ font-weight: 600;
240
+ color: #333;
241
+ margin-bottom: 8px;
242
+ }
243
+
244
+ .paper-description {
245
+ color: #666;
246
+ font-size: 0.9rem;
247
+ line-height: 1.4;
248
+ margin-bottom: 15px;
249
+ }
250
+
251
+ .paper-links {
252
+ display: flex;
253
+ gap: 10px;
254
+ flex-wrap: wrap;
255
+ }
256
+
257
+ .link-btn {
258
+ background: linear-gradient(45deg, #667eea, #764ba2);
259
+ color: white;
260
+ border: none;
261
+ padding: 6px 12px;
262
+ border-radius: 15px;
263
+ cursor: pointer;
264
+ font-size: 0.8rem;
265
+ text-decoration: none;
266
+ display: inline-flex;
267
+ align-items: center;
268
+ gap: 5px;
269
+ transition: all 0.3s ease;
270
+ }
271
+
272
+ .link-btn:hover {
273
+ transform: translateY(-2px);
274
+ box-shadow: 0 5px 15px rgba(102, 126, 234, 0.4);
275
+ }
276
+
277
+ .link-btn.code { background: linear-gradient(45deg, #4ECDC4, #44A08D); }
278
+ .link-btn.project { background: linear-gradient(45deg, #FF6B6B, #ee5a52); }
279
+
280
+ .search-container {
281
+ margin-bottom: 30px;
282
+ }
283
+
284
+ .search-box {
285
+ width: 100%;
286
+ max-width: 500px;
287
+ margin: 0 auto;
288
+ display: block;
289
+ padding: 15px 20px;
290
+ border: none;
291
+ border-radius: 25px;
292
+ font-size: 1rem;
293
+ box-shadow: 0 5px 15px rgba(0,0,0,0.1);
294
+ outline: none;
295
+ }
296
+
297
+ .filter-buttons {
298
+ display: flex;
299
+ justify-content: center;
300
+ gap: 10px;
301
+ margin: 20px 0;
302
+ flex-wrap: wrap;
303
+ }
304
+
305
+ .filter-btn {
306
+ background: rgba(255,255,255,0.9);
307
+ border: 2px solid #667eea;
308
+ color: #667eea;
309
+ padding: 8px 16px;
310
+ border-radius: 20px;
311
+ cursor: pointer;
312
+ transition: all 0.3s ease;
313
+ }
314
+
315
+ .filter-btn.active,
316
+ .filter-btn:hover {
317
+ background: #667eea;
318
+ color: white;
319
+ }
320
+
321
+ .back-to-top {
322
+ position: fixed;
323
+ bottom: 30px;
324
+ right: 30px;
325
+ background: #667eea;
326
+ color: white;
327
+ border: none;
328
+ border-radius: 50%;
329
+ width: 50px;
330
+ height: 50px;
331
+ cursor: pointer;
332
+ font-size: 1.2rem;
333
+ box-shadow: 0 5px 15px rgba(0,0,0,0.2);
334
+ transition: all 0.3s ease;
335
+ opacity: 0;
336
+ visibility: hidden;
337
+ }
338
+
339
+ .back-to-top.visible {
340
+ opacity: 1;
341
+ visibility: visible;
342
+ }
343
+
344
+ .back-to-top:hover {
345
+ transform: translateY(-3px);
346
+ background: #5a67d8;
347
+ }
348
+
349
+ @media (max-width: 768px) {
350
+ .main-sections {
351
+ grid-template-columns: 1fr;
352
+ gap: 20px;
353
+ }
354
+
355
+ .header h1 {
356
+ font-size: 2rem;
357
+ }
358
+
359
+ .papers-grid {
360
+ grid-template-columns: 1fr;
361
+ }
362
+
363
+ .section-card {
364
+ padding: 30px 20px;
365
+ }
366
+
367
+ .stats-bar {
368
+ gap: 15px;
369
+ }
370
+
371
+ .stat-badge {
372
+ padding: 8px 15px;
373
+ font-size: 0.9rem;
374
+ }
375
+ }
376
+
377
+ .dataset-table {
378
+ width: 100%;
379
+ border-collapse: collapse;
380
+ margin: 20px 0;
381
+ background: white;
382
+ border-radius: 10px;
383
+ overflow: hidden;
384
+ box-shadow: 0 5px 15px rgba(0,0,0,0.1);
385
+ }
386
+
387
+ .dataset-table th,
388
+ .dataset-table td {
389
+ padding: 12px 15px;
390
+ text-align: left;
391
+ border-bottom: 1px solid #eee;
392
+ }
393
+
394
+ .dataset-table th {
395
+ background: #667eea;
396
+ color: white;
397
+ font-weight: 600;
398
+ }
399
+
400
+ .dataset-table tr:hover {
401
+ background: #f8f9fa;
402
+ }
403
+
404
+ .tag {
405
+ background: #e3f2fd;
406
+ color: #1976d2;
407
+ padding: 2px 8px;
408
+ border-radius: 10px;
409
+ font-size: 0.8rem;
410
+ margin: 2px;
411
+ display: inline-block;
412
+ }
413
+ </style>
414
+ </head>
415
+ <body>
416
+ <div class="container">
417
+ <div class="header">
418
+ <h1><i class="fas fa-robot"></i> Awesome Multi-Agent Collaborative Perception</h1>
419
+ <p>Explore cutting-edge resources for Multi-Agent Collaborative Perception, Prediction, and Planning</p>
420
+
421
+ <div class="stats-bar">
422
+ <div class="stat-badge">
423
+ <span class="stat-number">200+</span>
424
+ <span>Papers</span>
425
+ </div>
426
+ <div class="stat-badge">
427
+ <span class="stat-number">25+</span>
428
+ <span>Datasets</span>
429
+ </div>
430
+ <div class="stat-badge">
431
+ <span class="stat-number">50+</span>
432
+ <span>Code Repos</span>
433
+ </div>
434
+ <div class="stat-badge">
435
+ <span class="stat-number">2025</span>
436
+ <span>Updated</span>
437
+ </div>
438
+ </div>
439
+ </div>
440
+
441
+ <div class="main-sections">
442
+ <div class="section-card perception-card" onclick="showContent('perception')">
443
+ <div class="section-icon">
444
+ <i class="fas fa-eye"></i>
445
+ </div>
446
+ <h2>🔍 Perception</h2>
447
+ <p>Multi-agent collaborative sensing, 3D object detection, semantic segmentation, and sensor fusion techniques for enhanced environmental understanding.</p>
448
+ <div class="stats">
449
+ <div class="stat">
450
+ <div class="stat-number">80+</div>
451
+ <div class="stat-label">Papers</div>
452
+ </div>
453
+ <div class="stat">
454
+ <div class="stat-number">V2X</div>
455
+ <div class="stat-label">Focus</div>
456
+ </div>
457
+ <div class="stat">
458
+ <div class="stat-number">15+</div>
459
+ <div class="stat-label">Venues</div>
460
+ </div>
461
+ </div>
462
+ </div>
463
+
464
+ <div class="section-card tracking-card" onclick="showContent('tracking')">
465
+ <div class="section-icon">
466
+ <i class="fas fa-route"></i>
467
+ </div>
468
+ <h2>📍 Tracking</h2>
469
+ <p>Multi-object tracking, collaborative state estimation, uncertainty quantification, and temporal consistency across multiple agents.</p>
470
+ <div class="stats">
471
+ <div class="stat">
472
+ <div class="stat-number">15+</div>
473
+ <div class="stat-label">Methods</div>
474
+ </div>
475
+ <div class="stat">
476
+ <div class="stat-number">MOT</div>
477
+ <div class="stat-label">Focus</div>
478
+ </div>
479
+ <div class="stat">
480
+ <div class="stat-number">5+</div>
481
+ <div class="stat-label">Datasets</div>
482
+ </div>
483
+ </div>
484
+ </div>
485
+
486
+ <div class="section-card prediction-card" onclick="showContent('prediction')">
487
+ <div class="section-icon">
488
+ <i class="fas fa-chart-line"></i>
489
+ </div>
490
+ <h2>🔮 Prediction</h2>
491
+ <p>Trajectory forecasting, motion prediction, behavior understanding, and cooperative planning for autonomous systems.</p>
492
+ <div class="stats">
493
+ <div class="stat">
494
+ <div class="stat-number">25+</div>
495
+ <div class="stat-label">Papers</div>
496
+ </div>
497
+ <div class="stat">
498
+ <div class="stat-number">GNN</div>
499
+ <div class="stat-label">Core Tech</div>
500
+ </div>
501
+ <div class="stat">
502
+ <div class="stat-number">E2E</div>
503
+ <div class="stat-label">Systems</div>
504
+ </div>
505
+ </div>
506
+ </div>
507
+
508
+ <div class="section-card datasets-card" onclick="showContent('datasets')">
509
+ <div class="section-icon">
510
+ <i class="fas fa-database"></i>
511
+ </div>
512
+ <h2>📊 Datasets</h2>
513
+ <p>Real-world and simulated datasets for collaborative perception research, including benchmarks and evaluation protocols.</p>
514
+ <div class="stats">
515
+ <div class="stat">
516
+ <div class="stat-number">25+</div>
517
+ <div class="stat-label">Datasets</div>
518
+ </div>
519
+ <div class="stat">
520
+ <div class="stat-number">Real</div>
521
+ <div class="stat-label">& Sim</div>
522
+ </div>
523
+ <div class="stat">
524
+ <div class="stat-number">3D</div>
525
+ <div class="stat-label">Labels</div>
526
+ </div>
527
+ </div>
528
+ </div>
529
+
530
+ <div class="section-card methods-card" onclick="showContent('methods')">
531
+ <div class="section-icon">
532
+ <i class="fas fa-cogs"></i>
533
+ </div>
534
+ <h2>⚙️ Methods</h2>
535
+ <p>Communication strategies, fusion techniques, robustness approaches, and learning paradigms for multi-agent systems.</p>
536
+ <div class="stats">
537
+ <div class="stat">
538
+ <div class="stat-number">60+</div>
539
+ <div class="stat-label">Methods</div>
540
+ </div>
541
+ <div class="stat">
542
+ <div class="stat-number">Comm</div>
543
+ <div class="stat-label">Efficient</div>
544
+ </div>
545
+ <div class="stat">
546
+ <div class="stat-number">Robust</div>
547
+ <div class="stat-label">Defense</div>
548
+ </div>
549
+ </div>
550
+ </div>
551
+
552
+ <div class="section-card conferences-card" onclick="showContent('conferences')">
553
+ <div class="section-icon">
554
+ <i class="fas fa-university"></i>
555
+ </div>
556
+ <h2>๐Ÿ›๏ธ Conferences</h2>
557
+ <p>Top-tier venues, workshops, and publication trends in collaborative perception and multi-agent systems research.</p>
558
+ <div class="stats">
559
+ <div class="stat">
560
+ <div class="stat-number">10+</div>
561
+ <div class="stat-label">Venues</div>
562
+ </div>
563
+ <div class="stat">
564
+ <div class="stat-number">2025</div>
565
+ <div class="stat-label">Latest</div>
566
+ </div>
567
+ <div class="stat">
568
+ <div class="stat-number">Trend</div>
569
+ <div class="stat-label">Analysis</div>
570
+ </div>
571
+ </div>
572
+ </div>
573
+ </div>
574
+
575
+ <!-- Content Panels -->
576
+ <div id="perceptionPanel" class="content-panel">
577
+ <div class="panel-header">
578
+ <h2 class="panel-title"><i class="fas fa-eye"></i> Collaborative Perception</h2>
579
+ <button class="close-btn" onclick="hideContent()">
580
+ <i class="fas fa-times"></i>
581
+ </button>
582
+ </div>
583
+
584
+ <div class="search-container">
585
+ <input type="text" class="search-box" placeholder="Search perception papers..." onkeyup="filterPapers('perception')">
586
+ </div>
587
+
588
+ <div class="filter-buttons">
589
+ <button class="filter-btn active" onclick="filterByVenue('perception', 'all')">All</button>
590
+ <button class="filter-btn" onclick="filterByVenue('perception', 'CVPR 2025')">CVPR 2025</button>
591
+ <button class="filter-btn" onclick="filterByVenue('perception', 'ICLR 2025')">ICLR 2025</button>
592
+ <button class="filter-btn" onclick="filterByVenue('perception', 'AAAI 2025')">AAAI 2025</button>
593
+ <button class="filter-btn" onclick="filterByVenue('perception', 'NeurIPS')">NeurIPS</button>
594
+ </div>
595
+
596
+ <div id="perceptionPapers" class="papers-grid">
597
+ <!-- Papers will be populated by JavaScript -->
598
+ </div>
599
+ </div>
600
+
601
+ <div id="trackingPanel" class="content-panel">
602
+ <div class="panel-header">
603
+ <h2 class="panel-title"><i class="fas fa-route"></i> Collaborative Tracking</h2>
604
+ <button class="close-btn" onclick="hideContent()">
605
+ <i class="fas fa-times"></i>
606
+ </button>
607
+ </div>
608
+
609
+ <div class="search-container">
610
+ <input type="text" class="search-box" placeholder="Search tracking papers..." onkeyup="filterPapers('tracking')">
611
+ </div>
612
+
613
+ <div id="trackingPapers" class="papers-grid">
614
+ <!-- Papers will be populated by JavaScript -->
615
+ </div>
616
+ </div>
617
+
618
+ <div id="predictionPanel" class="content-panel">
619
+ <div class="panel-header">
620
+ <h2 class="panel-title"><i class="fas fa-chart-line"></i> Collaborative Prediction</h2>
621
+ <button class="close-btn" onclick="hideContent()">
622
+ <i class="fas fa-times"></i>
623
+ </button>
624
+ </div>
625
+
626
+ <div class="search-container">
627
+ <input type="text" class="search-box" placeholder="Search prediction papers..." onkeyup="filterPapers('prediction')">
628
+ </div>
629
+
630
+ <div id="predictionPapers" class="papers-grid">
631
+ <!-- Papers will be populated by JavaScript -->
632
+ </div>
633
+ </div>
634
+
635
+ <div id="datasetsPanel" class="content-panel">
636
+ <div class="panel-header">
637
+ <h2 class="panel-title"><i class="fas fa-database"></i> Datasets & Benchmarks</h2>
638
+ <button class="close-btn" onclick="hideContent()">
639
+ <i class="fas fa-times"></i>
640
+ </button>
641
+ </div>
642
+
643
+ <div class="search-container">
644
+ <input type="text" class="search-box" placeholder="Search datasets..." onkeyup="filterDatasets()">
645
+ </div>
646
+
647
+ <div class="filter-buttons">
648
+ <button class="filter-btn active" onclick="filterDatasetType('all')">All</button>
649
+ <button class="filter-btn" onclick="filterDatasetType('real')">Real-World</button>
650
+ <button class="filter-btn" onclick="filterDatasetType('simulation')">Simulation</button>
651
+ <button class="filter-btn" onclick="filterDatasetType('v2x')">V2X</button>
652
+ </div>
653
+
654
+ <table class="dataset-table" id="datasetsTable">
655
+ <thead>
656
+ <tr>
657
+ <th>Dataset</th>
658
+ <th>Year</th>
659
+ <th>Type</th>
660
+ <th>Agents</th>
661
+ <th>Size</th>
662
+ <th>Features</th>
663
+ <th>Access</th>
664
+ </tr>
665
+ </thead>
666
+ <tbody>
667
+ <!-- Dataset rows will be populated by JavaScript -->
668
+ </tbody>
669
+ </table>
670
+ </div>
671
+
672
+ <div id="methodsPanel" class="content-panel">
673
+ <div class="panel-header">
674
+ <h2 class="panel-title"><i class="fas fa-cogs"></i> Methods & Techniques</h2>
675
+ <button class="close-btn" onclick="hideContent()">
676
+ <i class="fas fa-times"></i>
677
+ </button>
678
+ </div>
679
+
680
+ <div class="search-container">
681
+ <input type="text" class="search-box" placeholder="Search methods..." onkeyup="filterPapers('methods')">
682
+ </div>
683
+
684
+ <div class="filter-buttons">
685
+ <button class="filter-btn active" onclick="filterMethodType('all')">All</button>
686
+ <button class="filter-btn" onclick="filterMethodType('communication')">Communication</button>
687
+ <button class="filter-btn" onclick="filterMethodType('robustness')">Robustness</button>
688
+ <button class="filter-btn" onclick="filterMethodType('learning')">Learning</button>
689
+ </div>
690
+
691
+ <div id="methodsPapers" class="papers-grid">
692
+ <!-- Methods will be populated by JavaScript -->
693
+ </div>
694
+ </div>
695
+
696
+ <div id="conferencesPanel" class="content-panel">
697
+ <div class="panel-header">
698
+ <h2 class="panel-title"><i class="fas fa-university"></i> Conferences & Venues</h2>
699
+ <button class="close-btn" onclick="hideContent()">
700
+ <i class="fas fa-times"></i>
701
+ </button>
702
+ </div>
703
+
704
+ <div id="conferencesContent">
705
+ <!-- Conference content will be populated by JavaScript -->
706
+ </div>
707
+ </div>
708
+ </div>
709
+
710
+ <button class="back-to-top" id="backToTop" onclick="scrollToTop()">
711
+ <i class="fas fa-arrow-up"></i>
712
+ </button>
713
+
714
+ <script>
715
+ // Sample data - in a real implementation, this would come from your data sources
716
+ const perceptionPapers = [
717
+ {
718
+ title: "CoSDH: Communication-Efficient Collaborative Perception via Supply-Demand Awareness",
719
+ venue: "CVPR 2025",
720
+ description: "Novel approach for efficient collaborative perception using supply-demand awareness and intermediate-late hybridization.",
721
+ paper: "https://arxiv.org/abs/2503.03430",
722
+ code: "https://github.com/Xu2729/CoSDH",
723
+ project: null
724
+ },
725
+ {
726
+ title: "V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection",
727
+ venue: "CVPR 2025",
728
+ description: "Cooperative fusion of LiDAR and 4D radar sensors for enhanced 3D object detection with denoising diffusion.",
729
+ paper: "https://arxiv.org/abs/2411.08402",
730
+ code: "https://github.com/ylwhxht/V2X-R",
731
+ project: null
732
+ },
733
+ {
734
+ title: "STAMP: Scalable Task- And Model-Agnostic Collaborative Perception",
735
+ venue: "ICLR 2025",
736
+ description: "Framework for scalable collaborative perception that is both task and model agnostic.",
737
+ paper: "https://openreview.net/forum?id=8NdNniulYE",
738
+ code: "https://github.com/taco-group/STAMP",
739
+ project: null
740
+ },
741
+ {
742
+ title: "Where2comm: Efficient Collaborative Perception via Spatial Confidence Maps",
743
+ venue: "NeurIPS 2022",
744
+ description: "Groundbreaking work on efficient collaborative perception using spatial confidence maps for selective communication.",
745
+ paper: "https://openreview.net/forum?id=dLL4KXzKUpS",
746
+ code: "https://github.com/MediaBrain-SJTU/where2comm",
747
+ project: null
748
+ },
749
+ {
750
+ title: "CoBEVFlow: Robust Asynchronous Collaborative 3D Detection via Bird's Eye View Flow",
751
+ venue: "NeurIPS 2023",
752
+ description: "Handles temporal asynchrony in collaborative perception using bird's eye view flow.",
753
+ paper: "https://openreview.net/forum?id=UHIDdtxmVS",
754
+ code: "https://github.com/MediaBrain-SJTU/CoBEVFlow",
755
+ project: null
756
+ },
757
+ {
758
+ title: "UniV2X: End-to-End Autonomous Driving through V2X Cooperation",
759
+ venue: "AAAI 2025",
760
+ description: "Complete end-to-end system for autonomous driving with V2X cooperation.",
761
+ paper: "https://arxiv.org/abs/2404.00717",
762
+ code: "https://github.com/AIR-THU/UniV2X",
763
+ project: null
764
+ }
765
+ ];
766
+
767
+ const trackingPapers = [
768
+ {
769
+ title: "MOT-CUP: Multi-Object Tracking with Conformal Uncertainty Propagation",
770
+ venue: "Preprint",
771
+ description: "Collaborative multi-object tracking with conformal uncertainty propagation for robust state estimation.",
772
+ paper: "https://arxiv.org/abs/2303.14346",
773
+ code: "https://github.com/susanbao/mot_cup",
774
+ project: null
775
+ },
776
+ {
777
+ title: "DMSTrack: Probabilistic 3D Multi-Object Cooperative Tracking",
778
+ venue: "ICRA 2024",
779
+ description: "Probabilistic approach for 3D multi-object cooperative tracking using differentiable multi-sensor Kalman filter.",
780
+ paper: "https://arxiv.org/abs/2309.14655",
781
+ code: "https://github.com/eddyhkchiu/DMSTrack",
782
+ project: null
783
+ },
784
+ {
785
+ title: "CoDynTrust: Robust Asynchronous Collaborative Perception via Dynamic Feature Trust",
786
+ venue: "ICRA 2025",
787
+ description: "Dynamic feature trust modulus for robust asynchronous collaborative perception.",
788
+ paper: "https://arxiv.org/abs/2502.08169",
789
+ code: "https://github.com/CrazyShout/CoDynTrust",
790
+ project: null
791
+ }
792
+ ];
793
+
794
+ const predictionPapers = [
795
+ {
796
+ title: "V2X-Graph: Learning Cooperative Trajectory Representations",
797
+ venue: "NeurIPS 2024",
798
+ description: "Graph neural networks for learning cooperative trajectory representations in multi-agent systems.",
799
+ paper: "https://arxiv.org/abs/2311.00371",
800
+ code: "https://github.com/AIR-THU/V2X-Graph",
801
+ project: null
802
+ },
803
+ {
804
+ title: "Co-MTP: Cooperative Trajectory Prediction Framework",
805
+ venue: "ICRA 2025",
806
+ description: "Multi-temporal fusion framework for cooperative trajectory prediction in autonomous driving.",
807
+ paper: "https://arxiv.org/abs/2502.16589",
808
+ code: "https://github.com/xiaomiaozhang/Co-MTP",
809
+ project: null
810
+ },
811
+ {
812
+ title: "V2XPnP: Vehicle-to-Everything Spatio-Temporal Fusion",
813
+ venue: "Preprint",
814
+ description: "Spatio-temporal fusion approach for multi-agent perception and prediction in V2X systems.",
815
+ paper: "https://arxiv.org/abs/2412.01812",
816
+ code: "https://github.com/Zewei-Zhou/V2XPnP",
817
+ project: null
818
+ }
819
+ ];
820
+
821
+ const datasets = [
822
+ {
823
+ name: "DAIR-V2X",
824
+ year: "2022",
825
+ type: "real",
826
+ agents: "V2I",
827
+ size: "71K frames",
828
+ features: ["3D boxes", "Multi-modal", "Infrastructure"],
829
+ link: "https://github.com/AIR-THU/DAIR-V2X"
830
+ },
831
+ {
832
+ name: "V2V4Real",
833
+ year: "2023",
834
+ type: "real",
835
+ agents: "V2V",
836
+ size: "20K frames",
837
+ features: ["3D boxes", "Real V2V", "Highway"],
838
+ link: "https://github.com/ucla-mobility/V2V4Real"
839
+ },
840
+ {
841
+ name: "TUMTraf-V2X",
842
+ year: "2024",
843
+ type: "real",
844
+ agents: "V2X",
845
+ size: "2K sequences",
846
+ features: ["Dense labels", "Cooperative", "Urban"],
847
+ link: "https://github.com/tum-traffic-dataset/tum-traffic-dataset-dev-kit"
848
+ },
849
+ {
850
+ name: "OPV2V",
851
+ year: "2022",
852
+ type: "simulation",
853
+ agents: "V2V",
854
+ size: "Large-scale",
855
+ features: ["CARLA", "Multi-agent", "Benchmark"],
856
+ link: "https://github.com/DerrickXuNu/OpenCOOD"
857
+ },
858
+ {
859
+ name: "V2X-Sim",
860
+ year: "2021",
861
+ type: "simulation",
862
+ agents: "Multi",
863
+ size: "Scalable",
864
+ features: ["Multi-agent", "Collaborative", "Synthetic"],
865
+ link: "https://github.com/ai4ce/V2X-Sim"
866
+ }
867
+ ];
868
+
869
+ const methodsPapers = [
870
+ {
871
+ title: "ACCO: Is Discretization Fusion All You Need?",
872
+ venue: "Preprint",
873
+ description: "Investigation of discretization fusion techniques for collaborative perception efficiency.",
874
+ paper: "https://arxiv.org/abs/2503.13946",
875
+ code: "https://github.com/sidiangongyuan/ACCO",
876
+ project: null,
877
+ category: "communication"
878
+ },
879
+ {
880
+ title: "CP-Guard: Malicious Agent Detection and Defense",
881
+ venue: "AAAI 2025",
882
+ description: "Comprehensive framework for detecting and defending against malicious agents in collaborative perception.",
883
+ paper: "https://arxiv.org/abs/2412.12000",
884
+ code: null,
885
+ project: null,
886
+ category: "robustness"
887
+ },
888
+ {
889
+ title: "HEAL: Extensible Framework for Heterogeneous Collaborative Perception",
890
+ venue: "ICLR 2024",
891
+ description: "Open framework for heterogeneous collaborative perception with extensive customization options.",
892
+ paper: "https://openreview.net/forum?id=KkrDUGIASk",
893
+ code: "https://github.com/yifanlu0227/HEAL",
894
+ project: null,
895
+ category: "learning"
896
+ }
897
+ ];
898
+
899
+ function showContent(section) {
900
+ // Hide all panels
901
+ document.querySelectorAll('.content-panel').forEach(panel => {
902
+ panel.classList.remove('active');
903
+ });
904
+
905
+ // Show selected panel
906
+ document.getElementById(section + 'Panel').classList.add('active');
907
+
908
+ // Populate content based on section
909
+ if (section === 'perception') {
910
+ populatePapers('perceptionPapers', perceptionPapers);
911
+ } else if (section === 'tracking') {
912
+ populatePapers('trackingPapers', trackingPapers);
913
+ } else if (section === 'prediction') {
914
+ populatePapers('predictionPapers', predictionPapers);
915
+ } else if (section === 'datasets') {
916
+ populateDatasets();
917
+ } else if (section === 'methods') {
918
+ populatePapers('methodsPapers', methodsPapers);
919
+ } else if (section === 'conferences') {
920
+ populateConferences();
921
+ }
922
+
923
+ // Smooth scroll to panel
924
+ setTimeout(() => {
925
+ document.querySelector('.content-panel.active').scrollIntoView({
926
+ behavior: 'smooth',
927
+ block: 'start'
928
+ });
929
+ }, 100);
930
+ }
931
+
932
+ function hideContent() {
933
+ document.querySelectorAll('.content-panel').forEach(panel => {
934
+ panel.classList.remove('active');
935
+ });
936
+
937
+ // Scroll back to top
938
+ window.scrollTo({ top: 0, behavior: 'smooth' });
939
+ }
940
+
941
+ function populatePapers(containerId, papers) {
942
+ const container = document.getElementById(containerId);
943
+ let html = '';
944
+
945
+ papers.forEach(paper => {
946
+ html += `
947
+ <div class="paper-item" data-venue="${paper.venue}">
948
+ <div class="paper-venue">${paper.venue}</div>
949
+ <div class="paper-title">${paper.title}</div>
950
+ <div class="paper-description">${paper.description}</div>
951
+ <div class="paper-links">
952
+ <a href="${paper.paper}" class="link-btn" target="_blank">
953
+ <i class="fas fa-file-alt"></i> Paper
954
+ </a>
955
+ ${paper.code ? `<a href="${paper.code}" class="link-btn code" target="_blank">
956
+ <i class="fas fa-code"></i> Code
957
+ </a>` : ''}
958
+ ${paper.project ? `<a href="${paper.project}" class="link-btn project" target="_blank">
959
+ <i class="fas fa-globe"></i> Project
960
+ </a>` : ''}
961
+ </div>
962
+ </div>
963
+ `;
964
+ });
965
+
966
+ container.innerHTML = html;
967
+ }
968
+
969
+ function populateDatasets() {
970
+ const tbody = document.querySelector('#datasetsTable tbody');
971
+ let html = '';
972
+
973
+ datasets.forEach(dataset => {
974
+ const featureTags = dataset.features.map(feature =>
975
+ `<span class="tag">${feature}</span>`
976
+ ).join('');
977
+
978
+ html += `
979
+ <tr data-type="${dataset.type}" data-agents="${dataset.agents.toLowerCase()}">
980
+ <td><strong>${dataset.name}</strong></td>
981
+ <td>${dataset.year}</td>
982
+ <td>${dataset.type === 'real' ? '🌍 Real' : '🎮 Simulation'}</td>
983
+ <td>${dataset.agents}</td>
984
+ <td>${dataset.size}</td>
985
+ <td>${featureTags}</td>
986
+ <td><a href="${dataset.link}" class="link-btn" target="_blank">
987
+ <i class="fas fa-download"></i> Access
988
+ </a></td>
989
+ </tr>
990
+ `;
991
+ });
992
+
993
+ tbody.innerHTML = html;
994
+ }
995
+
996
+ function populateConferences() {
997
+ const container = document.getElementById('conferencesContent');
998
+ container.innerHTML = `
999
+ <div class="papers-grid">
1000
+ <div class="paper-item">
1001
+ <div class="paper-venue">CVPR 2025</div>
1002
+ <div class="paper-title">Computer Vision and Pattern Recognition</div>
1003
+ <div class="paper-description">5+ collaborative perception papers accepted. Focus on end-to-end systems and robustness.</div>
1004
+ <div class="paper-links">
1005
+ <a href="#" class="link-btn">
1006
+ <i class="fas fa-external-link-alt"></i> Conference
1007
+ </a>
1008
+ </div>
1009
+ </div>
1010
+
1011
+ <div class="paper-item">
1012
+ <div class="paper-venue">ICLR 2025</div>
1013
+ <div class="paper-title">International Conference on Learning Representations</div>
1014
+ <div class="paper-description">3+ papers accepted. Focus on learning representations and scalability frameworks.</div>
1015
+ <div class="paper-links">
1016
+ <a href="#" class="link-btn">
1017
+ <i class="fas fa-external-link-alt"></i> Conference
1018
+ </a>
1019
+ </div>
1020
+ </div>
1021
+
1022
+ <div class="paper-item">
1023
+ <div class="paper-venue">ICRA 2025</div>
1024
+ <div class="paper-title">International Conference on Robotics and Automation</div>
1025
+ <div class="paper-description">Robotics-focused collaborative perception. Applications in autonomous driving and UAV swarms.</div>
1026
+ <div class="paper-links">
1027
+ <a href="#" class="link-btn">
1028
+ <i class="fas fa-external-link-alt"></i> Conference
1029
+ </a>
1030
+ </div>
1031
+ </div>
1032
+
1033
+ <div class="paper-item">
1034
+ <div class="paper-venue">NeurIPS 2024</div>
1035
+ <div class="paper-title">Neural Information Processing Systems</div>
1036
+ <div class="paper-description">Premier venue for machine learning research with strong collaborative perception track record.</div>
1037
+ <div class="paper-links">
1038
+ <a href="#" class="link-btn">
1039
+ <i class="fas fa-external-link-alt"></i> Conference
1040
+ </a>
1041
+ </div>
1042
+ </div>
1043
+ </div>
1044
+ `;
1045
+ }
1046
+
1047
+ function filterByVenue(section, venue) {
1048
+ const buttons = document.querySelectorAll(`#${section}Panel .filter-btn`);
1049
+ buttons.forEach(btn => btn.classList.remove('active'));
1050
+ event.target.classList.add('active');
1051
+
1052
+ const papers = document.querySelectorAll(`#${section}Papers .paper-item`);
1053
+ papers.forEach(paper => {
1054
+ if (venue === 'all' || paper.dataset.venue === venue) {
1055
+ paper.style.display = 'block';
1056
+ } else {
1057
+ paper.style.display = 'none';
1058
+ }
1059
+ });
1060
+ }
1061
+
1062
+ function filterDatasetType(type) {
1063
+ const buttons = document.querySelectorAll('#datasetsPanel .filter-btn');
1064
+ buttons.forEach(btn => btn.classList.remove('active'));
1065
+ event.target.classList.add('active');
1066
+
1067
+ const rows = document.querySelectorAll('#datasetsTable tbody tr');
1068
+ rows.forEach(row => {
1069
+ if (type === 'all' ||
1070
+ row.dataset.type === type ||
1071
+ (type === 'v2x' && row.dataset.agents.includes('v'))) {
1072
+ row.style.display = 'table-row';
1073
+ } else {
1074
+ row.style.display = 'none';
1075
+ }
1076
+ });
1077
+ }
1078
+
1079
+ function filterMethodType(type) {
1080
+ const buttons = document.querySelectorAll('#methodsPanel .filter-btn');
1081
+ buttons.forEach(btn => btn.classList.remove('active'));
1082
+ event.target.classList.add('active');
1083
+
1084
+ // This would filter methods based on category
1085
+ // Implementation depends on how you structure method data
1086
+ }
1087
+
1088
+ function filterPapers(section) {
1089
+ const searchTerm = event.target.value.toLowerCase();
1090
+ const papers = document.querySelectorAll(`#${section}Papers .paper-item`);
1091
+
1092
+ papers.forEach(paper => {
1093
+ const title = paper.querySelector('.paper-title').textContent.toLowerCase();
1094
+ const description = paper.querySelector('.paper-description').textContent.toLowerCase();
1095
+
1096
+ if (title.includes(searchTerm) || description.includes(searchTerm)) {
1097
+ paper.style.display = 'block';
1098
+ } else {
1099
+ paper.style.display = 'none';
1100
+ }
1101
+ });
1102
+ }
1103
+
1104
+ function filterDatasets() {
1105
+ const searchTerm = event.target.value.toLowerCase();
1106
+ const rows = document.querySelectorAll('#datasetsTable tbody tr');
1107
+
1108
+ rows.forEach(row => {
1109
+ const text = row.textContent.toLowerCase();
1110
+ if (text.includes(searchTerm)) {
1111
+ row.style.display = 'table-row';
1112
+ } else {
1113
+ row.style.display = 'none';
1114
+ }
1115
+ });
1116
+ }
1117
+
1118
+ function scrollToTop() {
1119
+ window.scrollTo({ top: 0, behavior: 'smooth' });
1120
+ }
1121
+
1122
+ // Show/hide back to top button
1123
+ window.addEventListener('scroll', function() {
1124
+ const backToTop = document.getElementById('backToTop');
1125
+ if (window.pageYOffset > 300) {
1126
+ backToTop.classList.add('visible');
1127
+ } else {
1128
+ backToTop.classList.remove('visible');
1129
+ }
1130
+ });
1131
+
1132
+ // Initialize the page
1133
+ document.addEventListener('DOMContentLoaded', function() {
1134
+ // Any initialization code here
1135
+ });
1136
+ </script>
1137
+ </body>
1138
+ </html>
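
In the `collaborative-perception.html` script above, `filterMethodType` only toggles the active button; the category filtering itself is left as a comment. A minimal sketch of one way to complete it is shown below, assuming `populatePapers` is extended to stamp each `.paper-item` with a `data-category` attribute (e.g. `data-category="${paper.category || ''}"`) and that the inline handlers pass the click event explicitly (`onclick="filterMethodType(event, 'communication')"`); neither change is part of the committed file.

```javascript
// Sketch only: assumes each .paper-item carries a data-category attribute
// (added in populatePapers) and that the button passes the click event explicitly,
// instead of relying on the implicit global `event`.
function filterMethodType(evt, type) {
    // Highlight the clicked filter button.
    document.querySelectorAll('#methodsPanel .filter-btn')
        .forEach(btn => btn.classList.remove('active'));
    evt.target.classList.add('active');

    // Show only the method cards whose category matches the selected filter.
    document.querySelectorAll('#methodsPapers .paper-item').forEach(item => {
        const matches = type === 'all' || item.dataset.category === type;
        item.style.display = matches ? 'block' : 'none';
    });
}
```

Keeping the category on the DOM node would mirror how the venue filter already works through `data-venue`, so both filters follow the same pattern.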
index.html CHANGED
@@ -1,19 +1,1138 @@
1
- <!doctype html>
2
- <html>
3
- <head>
4
- <meta charset="utf-8" />
5
- <meta name="viewport" content="width=device-width" />
6
- <title>My static Space</title>
7
- <link rel="stylesheet" href="style.css" />
8
- </head>
9
- <body>
10
- <div class="card">
11
- <h1>Welcome to your static Space!</h1>
12
- <p>You can modify this app directly by editing <i>index.html</i> in the Files and versions tab.</p>
13
- <p>
14
- Also don't forget to check the
15
- <a href="https://huggingface.co/docs/hub/spaces" target="_blank">Spaces documentation</a>.
16
- </p>
17
- </div>
18
- </body>
19
- </html>
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta charset="UTF-8">
5
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
6
+ <title>🤖 Open Multi-Agent Collaborative Perception, Prediction, and Planning</title>
7
+ <link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css" rel="stylesheet">
8
+ <style>
9
+ * {
10
+ margin: 0;
11
+ padding: 0;
12
+ box-sizing: border-box;
13
+ }
14
+
15
+ body {
16
+ font-family: 'Inter', -apple-system, BlinkMacSystemFont, sans-serif;
17
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
18
+ min-height: 100vh;
19
+ color: #333;
20
+ }
21
+
22
+ .container {
23
+ max-width: 1400px;
24
+ margin: 0 auto;
25
+ padding: 20px;
26
+ }
27
+
28
+ .header {
29
+ text-align: center;
30
+ margin-bottom: 40px;
31
+ color: white;
32
+ }
33
+
34
+ .header h1 {
35
+ font-size: 3rem;
36
+ font-weight: 700;
37
+ margin-bottom: 10px;
38
+ text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
39
+ }
40
+
41
+ .header p {
42
+ font-size: 1.2rem;
43
+ opacity: 0.9;
44
+ margin-bottom: 20px;
45
+ }
46
+
47
+ .stats-bar {
48
+ display: flex;
49
+ justify-content: center;
50
+ gap: 30px;
51
+ margin: 20px 0;
52
+ flex-wrap: wrap;
53
+ }
54
+
55
+ .stat-badge {
56
+ background: rgba(255,255,255,0.2);
57
+ padding: 10px 20px;
58
+ border-radius: 25px;
59
+ color: white;
60
+ font-weight: bold;
61
+ text-align: center;
62
+ }
63
+
64
+ .stat-number {
65
+ font-size: 1.5rem;
66
+ display: block;
67
+ }
68
+
69
+ .main-sections {
70
+ display: grid;
71
+ grid-template-columns: repeat(auto-fit, minmax(350px, 1fr));
72
+ gap: 30px;
73
+ margin-bottom: 40px;
74
+ }
75
+
76
+ .section-card {
77
+ background: rgba(255,255,255,0.95);
78
+ border-radius: 20px;
79
+ padding: 40px;
80
+ cursor: pointer;
81
+ transition: all 0.3s ease;
82
+ text-align: center;
83
+ box-shadow: 0 10px 30px rgba(0,0,0,0.1);
84
+ position: relative;
85
+ overflow: hidden;
86
+ }
87
+
88
+ .section-card::before {
89
+ content: '';
90
+ position: absolute;
91
+ top: 0;
92
+ left: -100%;
93
+ width: 100%;
94
+ height: 100%;
95
+ background: linear-gradient(90deg, transparent, rgba(255,255,255,0.4), transparent);
96
+ transition: left 0.5s ease;
97
+ }
98
+
99
+ .section-card:hover::before {
100
+ left: 100%;
101
+ }
102
+
103
+ .section-card:hover {
104
+ transform: translateY(-10px);
105
+ box-shadow: 0 20px 40px rgba(0,0,0,0.15);
106
+ }
107
+
108
+ .section-icon {
109
+ font-size: 4rem;
110
+ margin-bottom: 20px;
111
+ color: #667eea;
112
+ }
113
+
114
+ .perception-card .section-icon { color: #FF6B6B; }
115
+ .tracking-card .section-icon { color: #4ECDC4; }
116
+ .prediction-card .section-icon { color: #45B7D1; }
117
+ .datasets-card .section-icon { color: #96CEB4; }
118
+ .methods-card .section-icon { color: #FECA57; }
119
+ .conferences-card .section-icon { color: #A55EEA; }
120
+
121
+ .section-card h2 {
122
+ font-size: 1.8rem;
123
+ margin-bottom: 15px;
124
+ color: #333;
125
+ }
126
+
127
+ .section-card p {
128
+ color: #666;
129
+ font-size: 1rem;
130
+ line-height: 1.6;
131
+ margin-bottom: 20px;
132
+ }
133
+
134
+ .stats {
135
+ display: flex;
136
+ justify-content: space-around;
137
+ margin-top: 20px;
138
+ }
139
+
140
+ .stat {
141
+ text-align: center;
142
+ }
143
+
144
+ .stat-number {
145
+ font-size: 1.3rem;
146
+ font-weight: bold;
147
+ color: #667eea;
148
+ }
149
+
150
+ .stat-label {
151
+ font-size: 0.8rem;
152
+ color: #666;
153
+ }
154
+
155
+ .content-panel {
156
+ display: none;
157
+ background: rgba(255,255,255,0.95);
158
+ border-radius: 20px;
159
+ padding: 30px;
160
+ margin-top: 20px;
161
+ box-shadow: 0 10px 30px rgba(0,0,0,0.1);
162
+ }
163
+
164
+ .content-panel.active {
165
+ display: block;
166
+ animation: slideIn 0.3s ease;
167
+ }
168
+
169
+ @keyframes slideIn {
170
+ from { opacity: 0; transform: translateY(20px); }
171
+ to { opacity: 1; transform: translateY(0); }
172
+ }
173
+
174
+ .panel-header {
175
+ display: flex;
176
+ justify-content: space-between;
177
+ align-items: center;
178
+ margin-bottom: 30px;
179
+ padding-bottom: 15px;
180
+ border-bottom: 2px solid #eee;
181
+ }
182
+
183
+ .panel-title {
184
+ font-size: 2rem;
185
+ color: #333;
186
+ }
187
+
188
+ .close-btn {
189
+ background: #ff4757;
190
+ color: white;
191
+ border: none;
192
+ border-radius: 50%;
193
+ width: 40px;
194
+ height: 40px;
195
+ cursor: pointer;
196
+ font-size: 1.2rem;
197
+ transition: all 0.3s ease;
198
+ }
199
+
200
+ .close-btn:hover {
201
+ background: #ff3838;
202
+ transform: scale(1.1);
203
+ }
204
+
205
+ .papers-grid {
206
+ display: grid;
207
+ grid-template-columns: repeat(auto-fit, minmax(400px, 1fr));
208
+ gap: 20px;
209
+ margin-bottom: 30px;
210
+ }
211
+
212
+ .paper-item {
213
+ background: #f8f9fa;
214
+ border-radius: 15px;
215
+ padding: 20px;
216
+ border-left: 4px solid #667eea;
217
+ transition: all 0.3s ease;
218
+ }
219
+
220
+ .paper-item:hover {
221
+ transform: translateX(5px);
222
+ background: #e3f2fd;
223
+ box-shadow: 0 5px 15px rgba(0,0,0,0.1);
224
+ }
225
+
226
+ .paper-venue {
227
+ background: #667eea;
228
+ color: white;
229
+ padding: 4px 12px;
230
+ border-radius: 15px;
231
+ font-size: 0.8rem;
232
+ font-weight: bold;
233
+ display: inline-block;
234
+ margin-bottom: 10px;
235
+ }
236
+
237
+ .paper-title {
238
+ font-size: 1.1rem;
239
+ font-weight: 600;
240
+ color: #333;
241
+ margin-bottom: 8px;
242
+ }
243
+
244
+ .paper-description {
245
+ color: #666;
246
+ font-size: 0.9rem;
247
+ line-height: 1.4;
248
+ margin-bottom: 15px;
249
+ }
250
+
251
+ .paper-links {
252
+ display: flex;
253
+ gap: 10px;
254
+ flex-wrap: wrap;
255
+ }
256
+
257
+ .link-btn {
258
+ background: linear-gradient(45deg, #667eea, #764ba2);
259
+ color: white;
260
+ border: none;
261
+ padding: 6px 12px;
262
+ border-radius: 15px;
263
+ cursor: pointer;
264
+ font-size: 0.8rem;
265
+ text-decoration: none;
266
+ display: inline-flex;
267
+ align-items: center;
268
+ gap: 5px;
269
+ transition: all 0.3s ease;
270
+ }
271
+
272
+ .link-btn:hover {
273
+ transform: translateY(-2px);
274
+ box-shadow: 0 5px 15px rgba(102, 126, 234, 0.4);
275
+ }
276
+
277
+ .link-btn.code { background: linear-gradient(45deg, #4ECDC4, #44A08D); }
278
+ .link-btn.project { background: linear-gradient(45deg, #FF6B6B, #ee5a52); }
279
+
280
+ .search-container {
281
+ margin-bottom: 30px;
282
+ }
283
+
284
+ .search-box {
285
+ width: 100%;
286
+ max-width: 500px;
287
+ margin: 0 auto;
288
+ display: block;
289
+ padding: 15px 20px;
290
+ border: none;
291
+ border-radius: 25px;
292
+ font-size: 1rem;
293
+ box-shadow: 0 5px 15px rgba(0,0,0,0.1);
294
+ outline: none;
295
+ }
296
+
297
+ .filter-buttons {
298
+ display: flex;
299
+ justify-content: center;
300
+ gap: 10px;
301
+ margin: 20px 0;
302
+ flex-wrap: wrap;
303
+ }
304
+
305
+ .filter-btn {
306
+ background: rgba(255,255,255,0.9);
307
+ border: 2px solid #667eea;
308
+ color: #667eea;
309
+ padding: 8px 16px;
310
+ border-radius: 20px;
311
+ cursor: pointer;
312
+ transition: all 0.3s ease;
313
+ }
314
+
315
+ .filter-btn.active,
316
+ .filter-btn:hover {
317
+ background: #667eea;
318
+ color: white;
319
+ }
320
+
321
+ .back-to-top {
322
+ position: fixed;
323
+ bottom: 30px;
324
+ right: 30px;
325
+ background: #667eea;
326
+ color: white;
327
+ border: none;
328
+ border-radius: 50%;
329
+ width: 50px;
330
+ height: 50px;
331
+ cursor: pointer;
332
+ font-size: 1.2rem;
333
+ box-shadow: 0 5px 15px rgba(0,0,0,0.2);
334
+ transition: all 0.3s ease;
335
+ opacity: 0;
336
+ visibility: hidden;
337
+ }
338
+
339
+ .back-to-top.visible {
340
+ opacity: 1;
341
+ visibility: visible;
342
+ }
343
+
344
+ .back-to-top:hover {
345
+ transform: translateY(-3px);
346
+ background: #5a67d8;
347
+ }
348
+
349
+ @media (max-width: 768px) {
350
+ .main-sections {
351
+ grid-template-columns: 1fr;
352
+ gap: 20px;
353
+ }
354
+
355
+ .header h1 {
356
+ font-size: 2rem;
357
+ }
358
+
359
+ .papers-grid {
360
+ grid-template-columns: 1fr;
361
+ }
362
+
363
+ .section-card {
364
+ padding: 30px 20px;
365
+ }
366
+
367
+ .stats-bar {
368
+ gap: 15px;
369
+ }
370
+
371
+ .stat-badge {
372
+ padding: 8px 15px;
373
+ font-size: 0.9rem;
374
+ }
375
+ }
376
+
377
+ .dataset-table {
378
+ width: 100%;
379
+ border-collapse: collapse;
380
+ margin: 20px 0;
381
+ background: white;
382
+ border-radius: 10px;
383
+ overflow: hidden;
384
+ box-shadow: 0 5px 15px rgba(0,0,0,0.1);
385
+ }
386
+
387
+ .dataset-table th,
388
+ .dataset-table td {
389
+ padding: 12px 15px;
390
+ text-align: left;
391
+ border-bottom: 1px solid #eee;
392
+ }
393
+
394
+ .dataset-table th {
395
+ background: #667eea;
396
+ color: white;
397
+ font-weight: 600;
398
+ }
399
+
400
+ .dataset-table tr:hover {
401
+ background: #f8f9fa;
402
+ }
403
+
404
+ .tag {
405
+ background: #e3f2fd;
406
+ color: #1976d2;
407
+ padding: 2px 8px;
408
+ border-radius: 10px;
409
+ font-size: 0.8rem;
410
+ margin: 2px;
411
+ display: inline-block;
412
+ }
413
+ </style>
414
+ </head>
415
+ <body>
416
+ <div class="container">
417
+ <div class="header">
418
+ <h1><i class="fas fa-robot"></i> Awesome Multi-Agent Collaborative Perception</h1>
419
+ <p>Explore cutting-edge resources for Multi-Agent Collaborative Perception, Prediction, and Planning</p>
420
+
421
+ <div class="stats-bar">
422
+ <div class="stat-badge">
423
+ <span class="stat-number">200+</span>
424
+ <span>Papers</span>
425
+ </div>
426
+ <div class="stat-badge">
427
+ <span class="stat-number">25+</span>
428
+ <span>Datasets</span>
429
+ </div>
430
+ <div class="stat-badge">
431
+ <span class="stat-number">50+</span>
432
+ <span>Code Repos</span>
433
+ </div>
434
+ <div class="stat-badge">
435
+ <span class="stat-number">2025</span>
436
+ <span>Updated</span>
437
+ </div>
438
+ </div>
439
+ </div>
440
+
441
+ <div class="main-sections">
442
+ <div class="section-card perception-card" onclick="showContent('perception')">
443
+ <div class="section-icon">
444
+ <i class="fas fa-eye"></i>
445
+ </div>
446
+ <h2>🔍 Perception</h2>
447
+ <p>Multi-agent collaborative sensing, 3D object detection, semantic segmentation, and sensor fusion techniques for enhanced environmental understanding.</p>
448
+ <div class="stats">
449
+ <div class="stat">
450
+ <div class="stat-number">80+</div>
451
+ <div class="stat-label">Papers</div>
452
+ </div>
453
+ <div class="stat">
454
+ <div class="stat-number">V2X</div>
455
+ <div class="stat-label">Focus</div>
456
+ </div>
457
+ <div class="stat">
458
+ <div class="stat-number">15+</div>
459
+ <div class="stat-label">Venues</div>
460
+ </div>
461
+ </div>
462
+ </div>
463
+
464
+ <div class="section-card tracking-card" onclick="showContent('tracking')">
465
+ <div class="section-icon">
466
+ <i class="fas fa-route"></i>
467
+ </div>
468
+ <h2>📍 Tracking</h2>
469
+ <p>Multi-object tracking, collaborative state estimation, uncertainty quantification, and temporal consistency across multiple agents.</p>
470
+ <div class="stats">
471
+ <div class="stat">
472
+ <div class="stat-number">15+</div>
473
+ <div class="stat-label">Methods</div>
474
+ </div>
475
+ <div class="stat">
476
+ <div class="stat-number">MOT</div>
477
+ <div class="stat-label">Focus</div>
478
+ </div>
479
+ <div class="stat">
480
+ <div class="stat-number">5+</div>
481
+ <div class="stat-label">Datasets</div>
482
+ </div>
483
+ </div>
484
+ </div>
485
+
486
+ <div class="section-card prediction-card" onclick="showContent('prediction')">
487
+ <div class="section-icon">
488
+ <i class="fas fa-chart-line"></i>
489
+ </div>
490
+ <h2>🔮 Prediction</h2>
491
+ <p>Trajectory forecasting, motion prediction, behavior understanding, and cooperative planning for autonomous systems.</p>
492
+ <div class="stats">
493
+ <div class="stat">
494
+ <div class="stat-number">25+</div>
495
+ <div class="stat-label">Papers</div>
496
+ </div>
497
+ <div class="stat">
498
+ <div class="stat-number">GNN</div>
499
+ <div class="stat-label">Core Tech</div>
500
+ </div>
501
+ <div class="stat">
502
+ <div class="stat-number">E2E</div>
503
+ <div class="stat-label">Systems</div>
504
+ </div>
505
+ </div>
506
+ </div>
507
+
508
+ <div class="section-card datasets-card" onclick="showContent('datasets')">
509
+ <div class="section-icon">
510
+ <i class="fas fa-database"></i>
511
+ </div>
512
+ <h2>📊 Datasets</h2>
513
+ <p>Real-world and simulated datasets for collaborative perception research, including benchmarks and evaluation protocols.</p>
514
+ <div class="stats">
515
+ <div class="stat">
516
+ <div class="stat-number">25+</div>
517
+ <div class="stat-label">Datasets</div>
518
+ </div>
519
+ <div class="stat">
520
+ <div class="stat-number">Real</div>
521
+ <div class="stat-label">& Sim</div>
522
+ </div>
523
+ <div class="stat">
524
+ <div class="stat-number">3D</div>
525
+ <div class="stat-label">Labels</div>
526
+ </div>
527
+ </div>
528
+ </div>
529
+
530
+ <div class="section-card methods-card" onclick="showContent('methods')">
531
+ <div class="section-icon">
532
+ <i class="fas fa-cogs"></i>
533
+ </div>
534
+ <h2>⚙️ Methods</h2>
535
+ <p>Communication strategies, fusion techniques, robustness approaches, and learning paradigms for multi-agent systems.</p>
536
+ <div class="stats">
537
+ <div class="stat">
538
+ <div class="stat-number">60+</div>
539
+ <div class="stat-label">Methods</div>
540
+ </div>
541
+ <div class="stat">
542
+ <div class="stat-number">Comm</div>
543
+ <div class="stat-label">Efficient</div>
544
+ </div>
545
+ <div class="stat">
546
+ <div class="stat-number">Robust</div>
547
+ <div class="stat-label">Defense</div>
548
+ </div>
549
+ </div>
550
+ </div>
551
+
552
+ <div class="section-card conferences-card" onclick="showContent('conferences')">
553
+ <div class="section-icon">
554
+ <i class="fas fa-university"></i>
555
+ </div>
556
+ <h2>🏛️ Conferences</h2>
557
+ <p>Top-tier venues, workshops, and publication trends in collaborative perception and multi-agent systems research.</p>
558
+ <div class="stats">
559
+ <div class="stat">
560
+ <div class="stat-number">10+</div>
561
+ <div class="stat-label">Venues</div>
562
+ </div>
563
+ <div class="stat">
564
+ <div class="stat-number">2025</div>
565
+ <div class="stat-label">Latest</div>
566
+ </div>
567
+ <div class="stat">
568
+ <div class="stat-number">Trend</div>
569
+ <div class="stat-label">Analysis</div>
570
+ </div>
571
+ </div>
572
+ </div>
573
+ </div>
574
+
575
+ <!-- Content Panels -->
576
+ <div id="perceptionPanel" class="content-panel">
577
+ <div class="panel-header">
578
+ <h2 class="panel-title"><i class="fas fa-eye"></i> Collaborative Perception</h2>
579
+ <button class="close-btn" onclick="hideContent()">
580
+ <i class="fas fa-times"></i>
581
+ </button>
582
+ </div>
583
+
584
+ <div class="search-container">
585
+ <input type="text" class="search-box" placeholder="Search perception papers..." onkeyup="filterPapers('perception')">
586
+ </div>
587
+
588
+ <div class="filter-buttons">
589
+ <button class="filter-btn active" onclick="filterByVenue('perception', 'all')">All</button>
590
+ <button class="filter-btn" onclick="filterByVenue('perception', 'CVPR 2025')">CVPR 2025</button>
591
+ <button class="filter-btn" onclick="filterByVenue('perception', 'ICLR 2025')">ICLR 2025</button>
592
+ <button class="filter-btn" onclick="filterByVenue('perception', 'AAAI 2025')">AAAI 2025</button>
593
+ <button class="filter-btn" onclick="filterByVenue('perception', 'NeurIPS')">NeurIPS</button>
594
+ </div>
595
+
596
+ <div id="perceptionPapers" class="papers-grid">
597
+ <!-- Papers will be populated by JavaScript -->
598
+ </div>
599
+ </div>
600
+
601
+ <div id="trackingPanel" class="content-panel">
602
+ <div class="panel-header">
603
+ <h2 class="panel-title"><i class="fas fa-route"></i> Collaborative Tracking</h2>
604
+ <button class="close-btn" onclick="hideContent()">
605
+ <i class="fas fa-times"></i>
606
+ </button>
607
+ </div>
608
+
609
+ <div class="search-container">
610
+ <input type="text" class="search-box" placeholder="Search tracking papers..." onkeyup="filterPapers('tracking')">
611
+ </div>
612
+
613
+ <div id="trackingPapers" class="papers-grid">
614
+ <!-- Papers will be populated by JavaScript -->
615
+ </div>
616
+ </div>
617
+
618
+ <div id="predictionPanel" class="content-panel">
619
+ <div class="panel-header">
620
+ <h2 class="panel-title"><i class="fas fa-chart-line"></i> Collaborative Prediction</h2>
621
+ <button class="close-btn" onclick="hideContent()">
622
+ <i class="fas fa-times"></i>
623
+ </button>
624
+ </div>
625
+
626
+ <div class="search-container">
627
+ <input type="text" class="search-box" placeholder="Search prediction papers..." onkeyup="filterPapers('prediction')">
628
+ </div>
629
+
630
+ <div id="predictionPapers" class="papers-grid">
631
+ <!-- Papers will be populated by JavaScript -->
632
+ </div>
633
+ </div>
634
+
635
+ <div id="datasetsPanel" class="content-panel">
636
+ <div class="panel-header">
637
+ <h2 class="panel-title"><i class="fas fa-database"></i> Datasets & Benchmarks</h2>
638
+ <button class="close-btn" onclick="hideContent()">
639
+ <i class="fas fa-times"></i>
640
+ </button>
641
+ </div>
642
+
643
+ <div class="search-container">
644
+ <input type="text" class="search-box" placeholder="Search datasets..." onkeyup="filterDatasets()">
645
+ </div>
646
+
647
+ <div class="filter-buttons">
648
+ <button class="filter-btn active" onclick="filterDatasetType('all')">All</button>
649
+ <button class="filter-btn" onclick="filterDatasetType('real')">Real-World</button>
650
+ <button class="filter-btn" onclick="filterDatasetType('simulation')">Simulation</button>
651
+ <button class="filter-btn" onclick="filterDatasetType('v2x')">V2X</button>
652
+ </div>
653
+
654
+ <table class="dataset-table" id="datasetsTable">
655
+ <thead>
656
+ <tr>
657
+ <th>Dataset</th>
658
+ <th>Year</th>
659
+ <th>Type</th>
660
+ <th>Agents</th>
661
+ <th>Size</th>
662
+ <th>Features</th>
663
+ <th>Access</th>
664
+ </tr>
665
+ </thead>
666
+ <tbody>
667
+ <!-- Dataset rows will be populated by JavaScript -->
668
+ </tbody>
669
+ </table>
670
+ </div>
671
+
672
+ <div id="methodsPanel" class="content-panel">
673
+ <div class="panel-header">
674
+ <h2 class="panel-title"><i class="fas fa-cogs"></i> Methods & Techniques</h2>
675
+ <button class="close-btn" onclick="hideContent()">
676
+ <i class="fas fa-times"></i>
677
+ </button>
678
+ </div>
679
+
680
+ <div class="search-container">
681
+ <input type="text" class="search-box" placeholder="Search methods..." onkeyup="filterPapers('methods')">
682
+ </div>
683
+
684
+ <div class="filter-buttons">
685
+ <button class="filter-btn active" onclick="filterMethodType('all')">All</button>
686
+ <button class="filter-btn" onclick="filterMethodType('communication')">Communication</button>
687
+ <button class="filter-btn" onclick="filterMethodType('robustness')">Robustness</button>
688
+ <button class="filter-btn" onclick="filterMethodType('learning')">Learning</button>
689
+ </div>
690
+
691
+ <div id="methodsPapers" class="papers-grid">
692
+ <!-- Methods will be populated by JavaScript -->
693
+ </div>
694
+ </div>
695
+
696
+ <div id="conferencesPanel" class="content-panel">
697
+ <div class="panel-header">
698
+ <h2 class="panel-title"><i class="fas fa-university"></i> Conferences & Venues</h2>
699
+ <button class="close-btn" onclick="hideContent()">
700
+ <i class="fas fa-times"></i>
701
+ </button>
702
+ </div>
703
+
704
+ <div id="conferencesContent">
705
+ <!-- Conference content will be populated by JavaScript -->
706
+ </div>
707
+ </div>
708
+ </div>
709
+
710
+ <button class="back-to-top" id="backToTop" onclick="scrollToTop()">
711
+ <i class="fas fa-arrow-up"></i>
712
+ </button>
713
+
714
+ <script>
715
+ // Sample data - in a real implementation, this would come from your data sources
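+ // A possible extension (not wired up here; papers.json is a hypothetical file): load
+ // these arrays from a static JSON file instead of hard-coding them, for example:
+ // fetch('papers.json')
+ //   .then(response => response.json())
+ //   .then(data => populatePapers('perceptionPapers', data.perception));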
716
+ const perceptionPapers = [
717
+ {
718
+ title: "CoSDH: Communication-Efficient Collaborative Perception via Supply-Demand Awareness",
719
+ venue: "CVPR 2025",
720
+ description: "Novel approach for efficient collaborative perception using supply-demand awareness and intermediate-late hybridization.",
721
+ paper: "https://arxiv.org/abs/2503.03430",
722
+ code: "https://github.com/Xu2729/CoSDH",
723
+ project: null
724
+ },
725
+ {
726
+ title: "V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection",
727
+ venue: "CVPR 2025",
728
+ description: "Cooperative fusion of LiDAR and 4D radar sensors for enhanced 3D object detection with denoising diffusion.",
729
+ paper: "https://arxiv.org/abs/2411.08402",
730
+ code: "https://github.com/ylwhxht/V2X-R",
731
+ project: null
732
+ },
733
+ {
734
+ title: "STAMP: Scalable Task- And Model-Agnostic Collaborative Perception",
735
+ venue: "ICLR 2025",
736
+ description: "Framework for scalable collaborative perception that is both task and model agnostic.",
737
+ paper: "https://openreview.net/forum?id=8NdNniulYE",
738
+ code: "https://github.com/taco-group/STAMP",
739
+ project: null
740
+ },
741
+ {
742
+ title: "Where2comm: Efficient Collaborative Perception via Spatial Confidence Maps",
743
+ venue: "NeurIPS 2022",
744
+ description: "Groundbreaking work on efficient collaborative perception using spatial confidence maps for selective communication.",
745
+ paper: "https://openreview.net/forum?id=dLL4KXzKUpS",
746
+ code: "https://github.com/MediaBrain-SJTU/where2comm",
747
+ project: null
748
+ },
749
+ {
750
+ title: "CoBEVFlow: Robust Asynchronous Collaborative 3D Detection via Bird's Eye View Flow",
751
+ venue: "NeurIPS 2023",
752
+ description: "Handles temporal asynchrony in collaborative perception using bird's eye view flow.",
753
+ paper: "https://openreview.net/forum?id=UHIDdtxmVS",
754
+ code: "https://github.com/MediaBrain-SJTU/CoBEVFlow",
755
+ project: null
756
+ },
757
+ {
758
+ title: "UniV2X: End-to-End Autonomous Driving through V2X Cooperation",
759
+ venue: "AAAI 2025",
760
+ description: "Complete end-to-end system for autonomous driving with V2X cooperation.",
761
+ paper: "https://arxiv.org/abs/2404.00717",
762
+ code: "https://github.com/AIR-THU/UniV2X",
763
+ project: null
764
+ }
765
+ ];
766
+
767
+ const trackingPapers = [
768
+ {
769
+ title: "MOT-CUP: Multi-Object Tracking with Conformal Uncertainty Propagation",
770
+ venue: "Preprint",
771
+ description: "Collaborative multi-object tracking with conformal uncertainty propagation for robust state estimation.",
772
+ paper: "https://arxiv.org/abs/2303.14346",
773
+ code: "https://github.com/susanbao/mot_cup",
774
+ project: null
775
+ },
776
+ {
777
+ title: "DMSTrack: Probabilistic 3D Multi-Object Cooperative Tracking",
778
+ venue: "ICRA 2024",
779
+ description: "Probabilistic approach for 3D multi-object cooperative tracking using differentiable multi-sensor Kalman filter.",
780
+ paper: "https://arxiv.org/abs/2309.14655",
781
+ code: "https://github.com/eddyhkchiu/DMSTrack",
782
+ project: null
783
+ },
784
+ {
785
+ title: "CoDynTrust: Robust Asynchronous Collaborative Perception via Dynamic Feature Trust",
786
+ venue: "ICRA 2025",
787
+ description: "Dynamic feature trust modulus for robust asynchronous collaborative perception.",
788
+ paper: "https://arxiv.org/abs/2502.08169",
789
+ code: "https://github.com/CrazyShout/CoDynTrust",
790
+ project: null
791
+ }
792
+ ];
793
+
794
+ const predictionPapers = [
795
+ {
796
+ title: "V2X-Graph: Learning Cooperative Trajectory Representations",
797
+ venue: "NeurIPS 2024",
798
+ description: "Graph neural networks for learning cooperative trajectory representations in multi-agent systems.",
799
+ paper: "https://arxiv.org/abs/2311.00371",
800
+ code: "https://github.com/AIR-THU/V2X-Graph",
801
+ project: null
802
+ },
803
+ {
804
+ title: "Co-MTP: Cooperative Trajectory Prediction Framework",
805
+ venue: "ICRA 2025",
806
+ description: "Multi-temporal fusion framework for cooperative trajectory prediction in autonomous driving.",
807
+ paper: "https://arxiv.org/abs/2502.16589",
808
+ code: "https://github.com/xiaomiaozhang/Co-MTP",
809
+ project: null
810
+ },
811
+ {
812
+ title: "V2XPnP: Vehicle-to-Everything Spatio-Temporal Fusion",
813
+ venue: "Preprint",
814
+ description: "Spatio-temporal fusion approach for multi-agent perception and prediction in V2X systems.",
815
+ paper: "https://arxiv.org/abs/2412.01812",
816
+ code: "https://github.com/Zewei-Zhou/V2XPnP",
817
+ project: null
818
+ }
819
+ ];
820
+
821
+ const datasets = [
822
+ {
823
+ name: "DAIR-V2X",
824
+ year: "2022",
825
+ type: "real",
826
+ agents: "V2I",
827
+ size: "71K frames",
828
+ features: ["3D boxes", "Multi-modal", "Infrastructure"],
829
+ link: "https://github.com/AIR-THU/DAIR-V2X"
830
+ },
831
+ {
832
+ name: "V2V4Real",
833
+ year: "2023",
834
+ type: "real",
835
+ agents: "V2V",
836
+ size: "20K frames",
837
+ features: ["3D boxes", "Real V2V", "Highway"],
838
+ link: "https://github.com/ucla-mobility/V2V4Real"
839
+ },
840
+ {
841
+ name: "TUMTraf-V2X",
842
+ year: "2024",
843
+ type: "real",
844
+ agents: "V2X",
845
+ size: "2K sequences",
846
+ features: ["Dense labels", "Cooperative", "Urban"],
847
+ link: "https://github.com/tum-traffic-dataset/tum-traffic-dataset-dev-kit"
848
+ },
849
+ {
850
+ name: "OPV2V",
851
+ year: "2022",
852
+ type: "simulation",
853
+ agents: "V2V",
854
+ size: "Large-scale",
855
+ features: ["CARLA", "Multi-agent", "Benchmark"],
856
+ link: "https://github.com/DerrickXuNu/OpenCOOD"
857
+ },
858
+ {
859
+ name: "V2X-Sim",
860
+ year: "2021",
861
+ type: "simulation",
862
+ agents: "Multi",
863
+ size: "Scalable",
864
+ features: ["Multi-agent", "Collaborative", "Synthetic"],
865
+ link: "https://github.com/ai4ce/V2X-Sim"
866
+ }
867
+ ];
868
+
869
+ const methodsPapers = [
870
+ {
871
+ title: "ACCO: Is Discretization Fusion All You Need?",
872
+ venue: "Preprint",
873
+ description: "Investigation of discretization fusion techniques for collaborative perception efficiency.",
874
+ paper: "https://arxiv.org/abs/2503.13946",
875
+ code: "https://github.com/sidiangongyuan/ACCO",
876
+ project: null,
877
+ category: "communication"
878
+ },
879
+ {
880
+ title: "CP-Guard: Malicious Agent Detection and Defense",
881
+ venue: "AAAI 2025",
882
+ description: "Comprehensive framework for detecting and defending against malicious agents in collaborative perception.",
883
+ paper: "https://arxiv.org/abs/2412.12000",
884
+ code: null,
885
+ project: null,
886
+ category: "robustness"
887
+ },
888
+ {
889
+ title: "HEAL: Extensible Framework for Heterogeneous Collaborative Perception",
890
+ venue: "ICLR 2024",
891
+ description: "Open framework for heterogeneous collaborative perception with extensive customization options.",
892
+ paper: "https://openreview.net/forum?id=KkrDUGIASk",
893
+ code: "https://github.com/yifanlu0227/HEAL",
894
+ project: null,
895
+ category: "learning"
896
+ }
897
+ ];
898
+
899
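+ // Reveal the panel for the clicked section card, populate its content, and smooth-scroll to it.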
+ function showContent(section) {
900
+ // Hide all panels
901
+ document.querySelectorAll('.content-panel').forEach(panel => {
902
+ panel.classList.remove('active');
903
+ });
904
+
905
+ // Show selected panel
906
+ document.getElementById(section + 'Panel').classList.add('active');
907
+
908
+ // Populate content based on section
909
+ if (section === 'perception') {
910
+ populatePapers('perceptionPapers', perceptionPapers);
911
+ } else if (section === 'tracking') {
912
+ populatePapers('trackingPapers', trackingPapers);
913
+ } else if (section === 'prediction') {
914
+ populatePapers('predictionPapers', predictionPapers);
915
+ } else if (section === 'datasets') {
916
+ populateDatasets();
917
+ } else if (section === 'methods') {
918
+ populatePapers('methodsPapers', methodsPapers);
919
+ } else if (section === 'conferences') {
920
+ populateConferences();
921
+ }
922
+
923
+ // Smooth scroll to panel
924
+ setTimeout(() => {
925
+ document.querySelector('.content-panel.active').scrollIntoView({
926
+ behavior: 'smooth',
927
+ block: 'start'
928
+ });
929
+ }, 100);
930
+ }
931
+
932
+ function hideContent() {
933
+ document.querySelectorAll('.content-panel').forEach(panel => {
934
+ panel.classList.remove('active');
935
+ });
936
+
937
+ // Scroll back to top
938
+ window.scrollTo({ top: 0, behavior: 'smooth' });
939
+ }
940
+
941
+ function populatePapers(containerId, papers) {
942
+ const container = document.getElementById(containerId);
943
+ let html = '';
944
+
945
+ papers.forEach(paper => {
946
+ html += `
947
+ <div class="paper-item" data-venue="${paper.venue}">
948
+ <div class="paper-venue">${paper.venue}</div>
949
+ <div class="paper-title">${paper.title}</div>
950
+ <div class="paper-description">${paper.description}</div>
951
+ <div class="paper-links">
952
+ <a href="${paper.paper}" class="link-btn" target="_blank">
953
+ <i class="fas fa-file-alt"></i> Paper
954
+ </a>
955
+ ${paper.code ? `<a href="${paper.code}" class="link-btn code" target="_blank">
956
+ <i class="fas fa-code"></i> Code
957
+ </a>` : ''}
958
+ ${paper.project ? `<a href="${paper.project}" class="link-btn project" target="_blank">
959
+ <i class="fas fa-globe"></i> Project
960
+ </a>` : ''}
961
+ </div>
962
+ </div>
963
+ `;
964
+ });
965
+
966
+ container.innerHTML = html;
967
+ }
968
+
969
+ function populateDatasets() {
970
+ const tbody = document.querySelector('#datasetsTable tbody');
971
+ let html = '';
972
+
973
+ datasets.forEach(dataset => {
974
+ const featureTags = dataset.features.map(feature =>
975
+ `<span class="tag">${feature}</span>`
976
+ ).join('');
977
+
978
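+ // The data-type / data-agents attributes on each row are what filterDatasetType() matches against.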
+ html += `
979
+ <tr data-type="${dataset.type}" data-agents="${dataset.agents.toLowerCase()}">
980
+ <td><strong>${dataset.name}</strong></td>
981
+ <td>${dataset.year}</td>
982
+ <td>${dataset.type === 'real' ? '🌍 Real' : '🎮 Simulation'}</td>
983
+ <td>${dataset.agents}</td>
984
+ <td>${dataset.size}</td>
985
+ <td>${featureTags}</td>
986
+ <td><a href="${dataset.link}" class="link-btn" target="_blank">
987
+ <i class="fas fa-download"></i> Access
988
+ </a></td>
989
+ </tr>
990
+ `;
991
+ });
992
+
993
+ tbody.innerHTML = html;
994
+ }
995
+
996
+ function populateConferences() {
997
+ const container = document.getElementById('conferencesContent');
998
+ container.innerHTML = `
999
+ <div class="papers-grid">
1000
+ <div class="paper-item">
1001
+ <div class="paper-venue">CVPR 2025</div>
1002
+ <div class="paper-title">Computer Vision and Pattern Recognition</div>
1003
+ <div class="paper-description">5+ collaborative perception papers accepted. Focus on end-to-end systems and robustness.</div>
1004
+ <div class="paper-links">
1005
+ <a href="#" class="link-btn">
1006
+ <i class="fas fa-external-link-alt"></i> Conference
1007
+ </a>
1008
+ </div>
1009
+ </div>
1010
+
1011
+ <div class="paper-item">
1012
+ <div class="paper-venue">ICLR 2025</div>
1013
+ <div class="paper-title">International Conference on Learning Representations</div>
1014
+ <div class="paper-description">3+ papers accepted. Focus on learning representations and scalability frameworks.</div>
1015
+ <div class="paper-links">
1016
+ <a href="#" class="link-btn">
1017
+ <i class="fas fa-external-link-alt"></i> Conference
1018
+ </a>
1019
+ </div>
1020
+ </div>
1021
+
1022
+ <div class="paper-item">
1023
+ <div class="paper-venue">ICRA 2025</div>
1024
+ <div class="paper-title">International Conference on Robotics and Automation</div>
1025
+ <div class="paper-description">Robotics-focused collaborative perception. Applications in autonomous driving and UAV swarms.</div>
1026
+ <div class="paper-links">
1027
+ <a href="#" class="link-btn">
1028
+ <i class="fas fa-external-link-alt"></i> Conference
1029
+ </a>
1030
+ </div>
1031
+ </div>
1032
+
1033
+ <div class="paper-item">
1034
+ <div class="paper-venue">NeurIPS 2024</div>
1035
+ <div class="paper-title">Neural Information Processing Systems</div>
1036
+ <div class="paper-description">Premier venue for machine learning research with strong collaborative perception track record.</div>
1037
+ <div class="paper-links">
1038
+ <a href="#" class="link-btn">
1039
+ <i class="fas fa-external-link-alt"></i> Conference
1040
+ </a>
1041
+ </div>
1042
+ </div>
1043
+ </div>
1044
+ `;
1045
+ }
1046
+
1047
+ function filterByVenue(section, venue) {
1048
+ const buttons = document.querySelectorAll(`#${section}Panel .filter-btn`);
1049
+ buttons.forEach(btn => btn.classList.remove('active'));
1050
+ event.target.classList.add('active');
1051
+
1052
+ const papers = document.querySelectorAll(`#${section}Papers .paper-item`);
1053
+ papers.forEach(paper => {
1054
+ if (venue === 'all' || paper.dataset.venue === venue) {
1055
+ paper.style.display = 'block';
1056
+ } else {
1057
+ paper.style.display = 'none';
1058
+ }
1059
+ });
1060
+ }
1061
+
1062
+ function filterDatasetType(type) {
1063
+ const buttons = document.querySelectorAll('#datasetsPanel .filter-btn');
1064
+ buttons.forEach(btn => btn.classList.remove('active'));
1065
+ event.target.classList.add('active');
1066
+
1067
+ const rows = document.querySelectorAll('#datasetsTable tbody tr');
1068
+ rows.forEach(row => {
1069
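+ // 'v2x' acts as a family filter: any agents value containing 'v' (V2V, V2I, V2X) matches.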
+ if (type === 'all' ||
1070
+ row.dataset.type === type ||
1071
+ (type === 'v2x' && row.dataset.agents.includes('v'))) {
1072
+ row.style.display = 'table-row';
1073
+ } else {
1074
+ row.style.display = 'none';
1075
+ }
1076
+ });
1077
+ }
1078
+
1079
+ function filterMethodType(type) {
1080
+ const buttons = document.querySelectorAll('#methodsPanel .filter-btn');
1081
+ buttons.forEach(btn => btn.classList.remove('active'));
1082
+ event.target.classList.add('active');
1083
+
1084
+ // Show only the method cards whose data-category matches the selected type
1085
+ const papers = document.querySelectorAll('#methodsPapers .paper-item');
+ papers.forEach(paper => {
+ paper.style.display = (type === 'all' || paper.dataset.category === type) ? 'block' : 'none';
+ });
1086
+ }
1087
+
1088
+ function filterPapers(section) {
1089
+ const searchTerm = event.target.value.toLowerCase();
1090
+ const papers = document.querySelectorAll(`#${section}Papers .paper-item`);
1091
+
1092
+ papers.forEach(paper => {
1093
+ const title = paper.querySelector('.paper-title').textContent.toLowerCase();
1094
+ const description = paper.querySelector('.paper-description').textContent.toLowerCase();
1095
+
1096
+ if (title.includes(searchTerm) || description.includes(searchTerm)) {
1097
+ paper.style.display = 'block';
1098
+ } else {
1099
+ paper.style.display = 'none';
1100
+ }
1101
+ });
1102
+ }
1103
+
1104
+ function filterDatasets() {
1105
+ const searchTerm = event.target.value.toLowerCase();
1106
+ const rows = document.querySelectorAll('#datasetsTable tbody tr');
1107
+
1108
+ rows.forEach(row => {
1109
+ const text = row.textContent.toLowerCase();
1110
+ if (text.includes(searchTerm)) {
1111
+ row.style.display = 'table-row';
1112
+ } else {
1113
+ row.style.display = 'none';
1114
+ }
1115
+ });
1116
+ }
1117
+
1118
+ function scrollToTop() {
1119
+ window.scrollTo({ top: 0, behavior: 'smooth' });
1120
+ }
1121
+
1122
+ // Show/hide back to top button
1123
+ window.addEventListener('scroll', function() {
1124
+ const backToTop = document.getElementById('backToTop');
1125
+ if (window.pageYOffset > 300) {
1126
+ backToTop.classList.add('visible');
1127
+ } else {
1128
+ backToTop.classList.remove('visible');
1129
+ }
1130
+ });
1131
+
1132
+ // Initialize the page
1133
+ document.addEventListener('DOMContentLoaded', function() {
1134
+ // Any initialization code here
1135
+ });
1136
+ </script>
1137
+ </body>
1138
+ </html>
requirements.txt ADDED
@@ -0,0 +1 @@
1
+ gradio==4.44.0
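+ # Sole runtime dependency; pinned so the Space builds reproducibly.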
simple-app.py ADDED
@@ -0,0 +1,200 @@
1
+ import gradio as gr
2
+
3
+ # Sample data for demonstration
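+ # (A small subset of the entries in collaborative-perception.html, kept inline for simplicity.)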
4
+ perception_papers = [
5
+ {
6
+ "title": "CoSDH: Communication-Efficient Collaborative Perception",
7
+ "venue": "CVPR 2025",
8
+ "description": "Novel approach for efficient collaborative perception using supply-demand awareness.",
9
+ "link": "https://arxiv.org/abs/2503.03430"
10
+ },
11
+ {
12
+ "title": "V2X-R: Cooperative LiDAR-4D Radar Fusion",
13
+ "venue": "CVPR 2025",
14
+ "description": "Cooperative fusion of LiDAR and 4D radar sensors for enhanced 3D object detection.",
15
+ "link": "https://arxiv.org/abs/2411.08402"
16
+ },
17
+ {
18
+ "title": "Where2comm: Efficient Collaborative Perception via Spatial Confidence Maps",
19
+ "venue": "NeurIPS 2022",
20
+ "description": "Groundbreaking work on efficient collaborative perception using spatial confidence maps.",
21
+ "link": "https://openreview.net/forum?id=dLL4KXzKUpS"
22
+ }
23
+ ]
24
+
25
+ datasets_data = [
26
+ ["DAIR-V2X", "2022", "Real-world", "V2I", "71K frames", "3D boxes, Infrastructure"],
27
+ ["V2V4Real", "2023", "Real-world", "V2V", "20K frames", "Real V2V, Highway"],
28
+ ["OPV2V", "2022", "Simulation", "V2V", "Large-scale", "CARLA, Multi-agent"],
29
+ ["V2X-Sim", "2021", "Simulation", "Multi", "Scalable", "Multi-agent, Collaborative"]
30
+ ]
31
+
32
+ def create_paper_card(paper):
33
+ return f"""
34
+ <div style="border: 1px solid #ddd; border-radius: 10px; padding: 20px; margin: 10px 0; background: white;">
35
+ <div style="background: #667eea; color: white; padding: 5px 10px; border-radius: 15px; display: inline-block; font-size: 0.8em; margin-bottom: 10px;">
36
+ {paper['venue']}
37
+ </div>
38
+ <h3 style="color: #333; margin: 10px 0;">{paper['title']}</h3>
39
+ <p style="color: #666; line-height: 1.5; margin-bottom: 15px;">{paper['description']}</p>
40
+ <a href="{paper['link']}" target="_blank" style="background: #667eea; color: white; padding: 8px 15px; border-radius: 5px; text-decoration: none; font-size: 0.9em;">
41
+ 📄 Read Paper
42
+ </a>
43
+ </div>
44
+ """
45
+
46
+ # Custom CSS
47
+ custom_css = """
48
+ .gradio-container {
49
+ max-width: 1200px !important;
50
+ }
51
+ .main-header {
52
+ text-align: center;
53
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
54
+ color: white;
55
+ padding: 40px 20px;
56
+ border-radius: 15px;
57
+ margin-bottom: 30px;
58
+ }
59
+ .stats-grid {
60
+ display: grid;
61
+ grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
62
+ gap: 20px;
63
+ margin: 20px 0;
64
+ }
65
+ .stat-card {
66
+ background: rgba(255,255,255,0.1);
67
+ padding: 20px;
68
+ border-radius: 10px;
69
+ text-align: center;
70
+ }
71
+ """
72
+
73
+ # Create the interface
74
+ with gr.Blocks(
75
+ title="🤖 Awesome Multi-Agent Collaborative Perception",
76
+ theme=gr.themes.Soft(),
77
+ css=custom_css
78
+ ) as demo:
79
+
80
+ # Header
81
+ gr.HTML("""
82
+ <div class="main-header">
83
+ <h1 style="font-size: 2.5rem; margin-bottom: 10px;">🤖 Awesome Multi-Agent Collaborative Perception</h1>
84
+ <p style="font-size: 1.2rem; opacity: 0.9;">Explore cutting-edge resources for Multi-Agent Collaborative Perception, Prediction, and Planning</p>
85
+ <div style="display: flex; justify-content: center; gap: 30px; margin-top: 20px; flex-wrap: wrap;">
86
+ <div style="background: rgba(255,255,255,0.2); padding: 10px 20px; border-radius: 25px;">
87
+ <div style="font-size: 1.5rem; font-weight: bold;">200+</div>
88
+ <div>Papers</div>
89
+ </div>
90
+ <div style="background: rgba(255,255,255,0.2); padding: 10px 20px; border-radius: 25px;">
91
+ <div style="font-size: 1.5rem; font-weight: bold;">25+</div>
92
+ <div>Datasets</div>
93
+ </div>
94
+ <div style="background: rgba(255,255,255,0.2); padding: 10px 20px; border-radius: 25px;">
95
+ <div style="font-size: 1.5rem; font-weight: bold;">50+</div>
96
+ <div>Code Repos</div>
97
+ </div>
98
+ </div>
99
+ </div>
100
+ """)
101
+
102
+ # Main navigation tabs
103
+ with gr.Tabs():
104
+
105
+ with gr.Tab("🔍 Perception"):
106
+ gr.Markdown("## Multi-Agent Collaborative Perception Papers")
107
+
108
+ # Create paper cards
109
+ papers_html = "".join([create_paper_card(paper) for paper in perception_papers])
110
+ gr.HTML(papers_html)
111
+
112
+ with gr.Tab("📊 Datasets"):
113
+ gr.Markdown("## Datasets & Benchmarks")
114
+
115
+ gr.Dataframe(
116
+ value=datasets_data,
117
+ headers=["Dataset", "Year", "Type", "Agents", "Size", "Features"],
118
+ datatype=["str", "str", "str", "str", "str", "str"],
119
+ interactive=False
120
+ )
121
+
122
+ gr.Markdown("""
123
+ ### Notable Datasets:
124
+ - **DAIR-V2X**: First real-world V2I collaborative perception dataset
125
+ - **V2V4Real**: Real vehicle-to-vehicle communication dataset
126
+ - **OPV2V**: Large-scale simulation benchmark in CARLA
127
+ - **V2X-Sim**: Comprehensive multi-agent simulation platform
128
+ """)
129
+
130
+ with gr.Tab("📍 Tracking"):
131
+ gr.Markdown("## Multi-Object Tracking & State Estimation")
132
+
133
+ gr.HTML("""
134
+ <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(300px, 1fr)); gap: 20px;">
135
+ <div style="border: 1px solid #ddd; border-radius: 10px; padding: 20px; background: white;">
136
+ <h3>MOT-CUP</h3>
137
+ <p>Multi-Object Tracking with Conformal Uncertainty Propagation</p>
138
+ <a href="https://arxiv.org/abs/2303.14346" target="_blank" style="color: #667eea;">๐Ÿ“„ Paper</a>
139
+ </div>
140
+ <div style="border: 1px solid #ddd; border-radius: 10px; padding: 20px; background: white;">
141
+ <h3>DMSTrack</h3>
142
+ <p>Probabilistic 3D Multi-Object Cooperative Tracking (ICRA 2024)</p>
143
+ <a href="https://arxiv.org/abs/2309.14655" target="_blank" style="color: #667eea;">๐Ÿ“„ Paper</a>
144
+ </div>
145
+ </div>
146
+ """)
147
+
148
+ with gr.Tab("🔮 Prediction"):
149
+ gr.Markdown("## Trajectory Forecasting & Motion Prediction")
150
+
151
+ gr.HTML("""
152
+ <div style="background: #f8f9fa; border-radius: 10px; padding: 20px; margin: 20px 0;">
153
+ <h3>🧠 Key Approaches:</h3>
154
+ <ul style="line-height: 1.8;">
155
+ <li><strong>Graph Neural Networks</strong>: Modeling agent interactions</li>
156
+ <li><strong>Transformer Architectures</strong>: Attention-based prediction</li>
157
+ <li><strong>Multi-Modal Fusion</strong>: Combining different sensor modalities</li>
158
+ <li><strong>Uncertainty Quantification</strong>: Reliable confidence estimation</li>
159
+ </ul>
160
+ </div>
161
+ """)
162
+
163
+ with gr.Tab("🏛️ Conferences"):
164
+ gr.Markdown("## Top Venues & Publication Trends")
165
+
166
+ conference_data = [
167
+ ["CVPR 2025", "5+", "End-to-end systems, robustness"],
168
+ ["ICLR 2025", "3+", "Learning representations, scalability"],
169
+ ["AAAI 2025", "4+", "AI applications, defense mechanisms"],
170
+ ["ICRA 2025", "6+", "Robotics applications, real-world deployment"],
171
+ ["NeurIPS 2024", "2+", "Theoretical foundations, novel architectures"]
172
+ ]
173
+
174
+ gr.Dataframe(
175
+ value=conference_data,
176
+ headers=["Conference", "Papers", "Focus Areas"],
177
+ datatype=["str", "str", "str"],
178
+ interactive=False
179
+ )
180
+
181
+ # Footer
182
+ gr.HTML("""
183
+ <div style="text-align: center; margin-top: 40px; padding: 30px; background: #f8f9fa; border-radius: 10px;">
184
+ <h3>🤝 Contributing</h3>
185
+ <p>We welcome contributions! Please submit papers, datasets, and code repositories via GitHub.</p>
186
+ <div style="margin-top: 20px;">
187
+ <a href="https://github.com/your-username/awesome-multi-agent-collaborative-perception" target="_blank"
188
+ style="background: #667eea; color: white; padding: 10px 20px; border-radius: 5px; text-decoration: none; margin: 5px;">
189
+ 📚 GitHub Repository
190
+ </a>
191
+ <a href="https://huggingface.co/spaces/your-username/awesome-multi-agent-collaborative-perception" target="_blank"
192
+ style="background: #ff6b6b; color: white; padding: 10px 20px; border-radius: 5px; text-decoration: none; margin: 5px;">
193
+ 🤗 Hugging Face Space
194
+ </a>
195
+ </div>
196
+ </div>
197
+ """)
198
+
199
+ if __name__ == "__main__":
200
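+ # launch() serves the app; locally it defaults to http://127.0.0.1:7860, while Hugging Face Spaces supplies its own host and port.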
+ demo.launch()