fix readme
README.md
CHANGED
@@ -161,7 +161,19 @@ The dataset only contains 'how-to' questions and their answers. Therefore, it ma
 
 There are two primary ways to load the QA dataset part:
 
-1.
+1. From the Hugging Face Datasets Hub:
+
+If the dataset is hosted on the Hugging Face Datasets Hub, you can load it directly using the datasets library:
+
+```python
+from datasets import load_dataset
+dataset = load_dataset('Lurunchik/WikiHowNFQA')
+```
+This will return a DatasetDict, a dictionary-like object that maps split names (e.g., 'train', 'validation', 'test') to Dataset objects.
+You can access a specific split like so: dataset['train'].
+
+
+2. Directly from the file: if you have the .jsonl files locally, you can load the dataset with the following Python code.
 This will result in a list of dictionaries, each representing a single instance in the dataset.
 
 To load the full dataset:
@@ -170,18 +182,7 @@
 import json
 
 dataset = []
-with open('
+with open('train.jsonl') as f:  # change to test.jsonl or valid.jsonl for the other splits
     for l in f:
         dataset.append(json.loads(l))
 ```
-
-2. From the Hugging Face Datasets Hub:
-
-If the dataset is hosted on the Hugging Face Datasets Hub, you can load it directly using the datasets library:
-
-```python
-from datasets import load_dataset
-dataset = load_dataset('Lurunchik/WikiHowNFQA"')
-```
-This will return a DatasetDict object, which is a dictionary-like object that maps split names (e.g., 'train', 'validation', 'test') to Dataset objects.
-You can access a specific split like so: dataset['train'].
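To see what the Hub route described in the updated README gives you in practice, here is a minimal sketch. It assumes only the `Lurunchik/WikiHowNFQA` repo id shown above; split names and column names are read from the returned DatasetDict rather than hard-coded.

```python
# Minimal sketch: load WikiHowNFQA from the Hugging Face Hub and inspect it.
# Only the repo id from the README is assumed; splits and columns are
# discovered from the returned DatasetDict.
from datasets import load_dataset

dataset = load_dataset('Lurunchik/WikiHowNFQA')

# DatasetDict acts like a dict mapping split name -> Dataset.
for split_name, split in dataset.items():
    print(f"{split_name}: {len(split)} examples, columns: {split.column_names}")

# Access a single split and look at its first instance (a plain dict).
train = dataset['train']
print(train[0])
```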
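For the local route, the README's comment names three files (train.jsonl, test.jsonl, valid.jsonl); the sketch below reads all of them into one dictionary of lists, assuming the files sit in the current working directory.

```python
# Sketch: read the three local JSONL splits named in the README comment
# (train.jsonl, valid.jsonl, test.jsonl) into a dict of lists of dicts.
# Assumes the files are in the current working directory.
import json
from pathlib import Path

splits = {}
for name in ('train', 'valid', 'test'):
    with Path(f'{name}.jsonl').open(encoding='utf-8') as f:
        # Each line is one JSON object, i.e. one dataset instance.
        splits[name] = [json.loads(line) for line in f]

print({name: len(rows) for name, rows in splits.items()})
```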