Update README.md
README.md CHANGED
@@ -1,8 +1,8 @@
----
-license: apache-2.0
-tags:
-- code
----
+---
+license: apache-2.0
+tags:
+- code
+---
 
 Boolean networks, each sampled over 8192 timesteps, with network sizes ranging from 32 to 1024 and connectivities (sizes of functions) per item drawn from random Gaussian mixture models
 
@@ -17,4 +17,16 @@ initial_state
 function_keys # think left (input) side of truth table
 function_values # think right (output) side of truth table
 function_inputs # the node indices that go into this function
-```
+```
+
+For a tiny subset with 128 networks, for example to test a pipeline, try:
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset(
+    "parquet", data_files=[
+        "https://huggingface.co/datasets/midwestern-simulation/boolean-networks/resolve/main/chunk_gUKO6HBS_128_8192_32-1024.parquet"
+    ]
+)
+```