# Dataset Card for NeoRL-2: Near Real-World Benchmarks for Offline Reinforcement Learning

## Dataset Summary

**NeoRL-2** is a collection of seven near-real-world offline-RL datasets *plus* their evaluation simulators. This repository provides the offline-RL datasets; the simulators are available at <https://github.com/polixir/NeoRL2>.

Each task injects one or more realistic challenges (delays, exogenous disturbances, global safety constraints, rule-based behaviour data, and/or severe data scarcity) into a lightweight control environment.

---

## Dataset Details

| Challenge | Brief description | Appears in |
|-----------|-------------------|------------|
| **Delay** | Long and variable observation-to-effect latency | Pipeline, Simglucose |
| **External factors** | State variables the agent cannot influence (e.g. wind, ground friction) | RocketRecovery, RandomFrictionHopper, Simglucose |
| **Global safety constraints** | Hard limits that must never be violated | SafetyHalfCheetah |
| **Rule-based behaviour policy** | Trajectories from a PID or other deterministic controller | DMSD |
| **Severely limited data** | Tiny datasets reflecting expensive experimentation | Fusion, RocketRecovery, SafetyHalfCheetah |

* **Curated by:** Polixir Technologies
* **Paper:** Gao *et al.*, “NeoRL-2: Near Real-World Benchmarks for Offline Reinforcement Learning with Extended Realistic Scenarios”, arXiv:2503.19267 (2025)
* **Repository (evaluation environments for the datasets):** <https://github.com/polixir/NeoRL2>
* **Task:** offline / batch reinforcement learning

## Uses

### Direct Use

* Benchmarking offline-RL algorithms under near-deployment conditions
* Studying robustness to delays, safety limits, exogenous disturbances and data scarcity
* Developing data-efficient model-based or model-free methods able to outperform conservative behaviour policies

#### Loading example

```python
from datasets import load_dataset

dmsd = load_dataset("polixir/neorl2", "DMSD", split="train")
# Each row is one transition: observations, actions, rewards, next_observations, terminals.
state, action, reward, next_state, done = dmsd[0].values()
```

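Policies trained on these datasets are intended to be scored on the matching NeoRL2 simulators rather than on the data itself. The loop below is a minimal sketch that assumes a Gymnasium-style environment obtained from the NeoRL2 package (see its README for the exact constructor) and a `policy` callable mapping an observation to an action; both `env` and `policy` are placeholders here.

```python
import numpy as np

def evaluate(env, policy, episodes=10):
    """Average undiscounted return of `policy` over `episodes` rollouts (Gymnasium API)."""
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action = policy(np.asarray(obs, dtype=np.float32))
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    return float(np.mean(returns))
```
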
### Out-of-Scope Use

* Online RL with unlimited interaction
* Safety-critical decision-making without extensive validation on the real system

---

## Dataset Structure

Each Parquet row contains:

| Key | Type | Description |
|--------------------|-------------|-------------------------------------------------|
| `observations` | float32[] | Raw observation vector (dim varies per task) |
| `actions` | float32[] | Continuous action taken by the behaviour policy |
| `rewards` | float32 | Scalar reward |
| `next_observations`| float32[] | Observation at the next timestep |
| `terminals` | bool | `True` if episode ended (termination or safety) |

Typical dataset sizes are **≈100k transitions**; *Fusion*, *RocketRecovery* and *SafetyHalfCheetah* are smaller by design.

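For training loops it is often convenient to materialise a split as flat NumPy arrays and sample transition minibatches from them. The sketch below assumes the column names listed above and reuses the `DMSD` configuration from the loading example; the batch size and seed are arbitrary.

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("polixir/neorl2", "DMSD", split="train")

# Stack each column into a flat array: (N, obs_dim), (N, act_dim), (N,), ...
obs       = np.asarray(ds["observations"], dtype=np.float32)
actions   = np.asarray(ds["actions"], dtype=np.float32)
rewards   = np.asarray(ds["rewards"], dtype=np.float32)
next_obs  = np.asarray(ds["next_observations"], dtype=np.float32)
terminals = np.asarray(ds["terminals"], dtype=bool)

# Uniformly sample a minibatch of transitions, as a typical offline-RL update would.
rng = np.random.default_rng(0)
idx = rng.integers(0, len(ds), size=256)
batch = (obs[idx], actions[idx], rewards[idx], next_obs[idx], terminals[idx])
```
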
---

## Baseline Benchmark

### Normalised return (0–100), best of 3 seeds

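The normalisation itself is not restated on this card. A common convention for such 0–100 scores in offline-RL benchmarks (an assumption here; the task-specific reference returns are defined in the NeoRL-2 repository) rescales the raw return of a policy $\pi$ against random and expert reference returns:

$$
\text{normalised return} = 100 \times \frac{R_\pi - R_{\text{random}}}{R_{\text{expert}} - R_{\text{random}}}
$$
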
| Task | Data | BC | CQL | EDAC | MCQ | TD3BC | MOPO | COMBO | RAMBO | MOBILE |
|------|------|----|-----|------|-----|-------|------|-------|-------|--------|
| **Pipeline** | 69.25 | 68.6 ± 13.4 | **81.1 ± 8.3** | 72.9 ± 4.6 | 49.7 ± 7.4 | **81.9 ± 7.5** | −26.3 ± 92.7 | 55.5 ± 4.3 | 24.1 ± 74.4 | 65.5 ± 4.1 |
| **Simglucose** | 73.9 | **75.1 ± 0.7** | 11.0 ± 3.4 | 8.1 ± 0.3 | 29.6 ± 5.7 | **74.2 ± 0.4** | 34.6 ± 28.1 | 23.2 ± 2.5 | 10.8 ± 0.9 | 9.3 ± 0.2 |
| **RocketRecovery** | 75.3 | 72.8 ± 2.5 | 74.3 ± 1.4 | 65.7 ± 9.8 | **76.5 ± 0.8** | **79.7 ± 0.9** | −27.7 ± 105.6 | 74.7 ± 0.7 | −44.2 ± 263.0 | 43.7 ± 17.5 |
| **RandomFrictionHopper** | 28.7 | 28.0 ± 0.3 | 33.0 ± 1.2 | **34.7 ± 1.3** | 31.7 ± 1.3 | 29.5 ± 0.7 | 32.5 ± 5.8 | 34.1 ± 4.7 | 29.6 ± 7.2 | **35.1 ± 0.5** |
| **DMSD** | 56.6 | 65.1 ± 1.6 | 70.2 ± 1.1 | **78.7 ± 2.3** | **77.8 ± 1.2** | 60.0 ± 0.8 | 68.2 ± 0.7 | 68.3 ± 0.4 | 76.2 ± 1.9 | 64.4 ± 0.8 |
| **Fusion** | 48.8 | 55.2 ± 0.3 | 55.9 ± 1.9 | **58.0 ± 0.7** | 49.7 ± 1.1 | 54.6 ± 0.8 | −11.6 ± 22.2 | 55.5 ± 0.3 | **59.6 ± 5.0** | 5.0 ± 7.1 |
| **SafetyHalfCheetah** | 73.6 | 70.2 ± 0.4 | 71.2 ± 0.6 | 53.1 ± 11.1 | 54.7 ± 4.3 | 68.6 ± 0.4 | 23.7 ± 24.3 | 57.8 ± 13.3 | −422.4 ± 307.5 | 8.7 ± 3.9 |

### How often do algorithms beat the behaviour policy?

| Margin | BC | CQL | EDAC | MCQ | TD3BC | MOPO | COMBO | RAMBO | MOBILE |
|--------|----|-----|------|-----|-------|------|-------|-------|--------|
| ≥ 0 | 3 | 4 | 4 | 4 | **6** | 2 | 3 | 3 | 2 |
| ≥ +3 | 2 | 4 | 4 | 2 | **4** | 2 | 3 | 2 | 2 |
| ≥ +5 | 2 | 3 | 3 | 1 | **2** | 1 | 3 | 2 | 2 |
| ≥ +10 | 0 | 2 | 1 | 1 | **1** | 1 | 1 | 2 | 0 |

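These counts follow directly from the first table: for each algorithm, compare its mean normalised return on a task with that task's Data score and count the tasks where the difference reaches the margin. A minimal sketch of that tally is below; the `results` dict is only an illustrative excerpt of the table, not the full benchmark.

```python
# Mean normalised returns copied from the table above (excerpt for illustration).
data_score = {"Pipeline": 69.25, "DMSD": 56.6}
results = {
    "TD3BC": {"Pipeline": 81.9, "DMSD": 60.0},
    "CQL":   {"Pipeline": 81.1, "DMSD": 70.2},
}

def count_beats(algo_scores, data_score, margin):
    """Number of tasks where the algorithm exceeds the Data score by at least `margin` points."""
    return sum(1 for task, score in algo_scores.items() if score - data_score[task] >= margin)

for margin in (0, 3, 5, 10):
    counts = {algo: count_beats(scores, data_score, margin) for algo, scores in results.items()}
    print(f">= +{margin}: {counts}")
```
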
### Key conclusions

* No baseline “solves” any task (score ≥ 95). The best result is TD3BC’s 81.9 on *Pipeline*.
* **TD3BC** is the most reliable algorithm, surpassing the data in 6/7 tasks and still leading at stricter margins.
* Model-based methods (MOPO, RAMBO and MOBILE) are brittle, with large variance and occasional catastrophic divergence.
* *DMSD* is the easiest task: many algorithms exceed the behaviour policy by 20+ points thanks to the simple PID data.
* *SafetyHalfCheetah* is the hardest: every method trails the data due to strict safety penalties and limited samples.
* In general, model-free approaches show smaller error bars than model-based ones, underlining the challenge of learning accurate dynamics under delay, disturbance and scarcity.

---

## Citation

```bibtex
@misc{gao2025neorl2,
  title         = {NeoRL-2: Near Real-World Benchmarks for Offline Reinforcement Learning with Extended Realistic Scenarios},
  author        = {Songyi Gao and Zuolin Tu and Rong-Jun Qin and Yi-Hao Sun and Xiong-Hui Chen and Yang Yu},
  year          = {2025},
  eprint        = {2503.19267},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}
```

---

## Contact

Questions or bug reports? Please open an issue on the [NeoRL-2 GitHub repo](https://github.com/polixir/NeoRL2).