---
task_categories:
- audio-classification
language:
- tt
size_categories:
- 1K<n<10K
---
# Tatar Speech Commands Dataset

This dataset contains 3,547 one-second utterances of 35 commands commonly used in robotics, IoT, and smart systems. The data was collected from 153 speakers.
## Dataset Statistics

* **Number of commands:** 35
* **Number of utterances:** 3,547
* **Number of speakers:** 153
* **Audio length:** 1 second per utterance
## Data Download

The dataset can be downloaded from [Google Drive](https://drive.google.com/file/d/1CBmVeAYgNrkNKhL1wtG7KUKuLJ9hOfHL/view?usp=sharing).
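Since the card lists the `soundfolder` format, the extracted archive presumably contains one subdirectory per command, each holding that command's `.wav` clips. A minimal sketch of indexing such a tree (the folder and file names below are placeholders, not actual dataset labels):

```python
from pathlib import Path
import tempfile

def index_clips(root):
    """Map each label (subfolder name) to a sorted list of its .wav file names."""
    root = Path(root)
    return {d.name: sorted(p.name for p in d.glob("*.wav"))
            for d in sorted(root.iterdir()) if d.is_dir()}

# Build a throwaway tree that mimics the assumed layout (placeholder labels).
demo = Path(tempfile.mkdtemp())
layout = {"command_a": ["a1.wav", "a2.wav"], "command_b": ["b1.wav"]}
for label, names in layout.items():
    (demo / label).mkdir()
    for name in names:
        (demo / label / name).touch()

clips = index_clips(demo)  # {'command_a': ['a1.wav', 'a2.wav'], 'command_b': ['b1.wav']}
```

Such an index makes it easy to split clips by label or by speaker before training.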
## Related Work

The Keyword-MLP model ([https://github.com/AI-Research-BD/Keyword-MLP](https://github.com/AI-Research-BD/Keyword-MLP)) was used for training and evaluation on this dataset. The TatarSCR repository ([https://github.com/IS2AI/TatarSCR.git](https://github.com/IS2AI/TatarSCR.git)) contains the code and configurations used in this work. Preprocessing and augmentation can be performed with the `data_preprocessing_augmentation.ipynb` notebook, which requires the ESC-50 dataset ([https://github.com/karolpiczak/ESC-50](https://github.com/karolpiczak/ESC-50)).
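As a rough illustration of the kind of noise augmentation such a notebook typically performs, here is a sketch that mixes a background clip into an utterance at a target signal-to-noise ratio (the 16 kHz sample rate, the SNR value, and the synthetic signals are all assumptions; this is not the notebook's actual code):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add noise to speech, scaled so the resulting speech-to-noise ratio is snr_db."""
    speech_power = np.mean(speech.astype(np.float64) ** 2)
    noise_power = np.mean(noise.astype(np.float64) ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Demo with synthetic 1-second signals at an assumed 16 kHz sample rate.
sr = 16000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 440 * t)                   # stand-in for an utterance
noise = np.random.default_rng(0).standard_normal(sr)   # stand-in for a background clip
noisy = mix_at_snr(speech, noise, snr_db=10.0)
```

In practice the background clip would come from a dataset such as ESC-50, cropped or looped to the utterance length.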
## Model Inference

Inference can be performed with either PyTorch or ONNX. Two PyTorch scripts are provided: `inference.py` for short audio clips and `window_inference.py`, which applies a sliding-window approach to longer clips. ONNX inference is handled by `onnx_inference.py`. The `label_map.json` file is required for inference.
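The sliding-window idea behind `window_inference.py` can be sketched model-agnostically as follows (the 16 kHz sample rate and the 50% hop are assumptions, not values taken from the repository):

```python
import numpy as np

SAMPLE_RATE = 16000      # assumed sample rate
WINDOW = SAMPLE_RATE     # 1-second frames, matching the dataset's clip length
HOP = SAMPLE_RATE // 2   # 50% overlap; the hop size is a hypothetical choice

def sliding_windows(waveform, window=WINDOW, hop=HOP):
    """Split a long waveform into fixed-length frames; zero-pad the last one."""
    frames, start = [], 0
    while start < len(waveform):
        frame = waveform[start:start + window]
        if len(frame) < window:
            frame = np.pad(frame, (0, window - len(frame)))
        frames.append(frame)
        if start + window >= len(waveform):
            break
        start += hop
    return np.stack(frames)

# A 3.5-second clip yields six overlapping 1-second frames.
clip = np.zeros(int(3.5 * SAMPLE_RATE))
frames = sliding_windows(clip)
```

Each frame would then be classified independently, with per-frame scores aggregated (for example, by averaging per-class probabilities) to decide which command, if any, occurs in the clip.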