Modalities: Audio · Languages: Tatar · Libraries: Datasets
rassulya committed · Commit 444cac0 (verified) · Parent: 0caace1

Upload README.md with huggingface_hub

Files changed (1): README.md (+16 −6)
# Tatar Speech Commands Dataset

This dataset contains 3,547 one-second utterances of 35 commands commonly used in robotics, IoT, and smart systems. The recordings were collected from 153 speakers and are suitable for training and evaluating speech command recognition models.
## Dataset Statistics

* **Number of speakers:** 153
* **Number of commands:** 35
* **Number of utterances:** 3,547
* **Audio length:** 1 second per utterance
## Data Download

The dataset can be downloaded from [Google Drive](https://drive.google.com/file/d/1CBmVeAYgNrkNKhL1wtG7KUKuLJ9hOfHL/view?usp=sharing).
## Preprocessing and Augmentation

A Jupyter Notebook (`data_preprocessing_augmentation.ipynb`) is provided for data preprocessing and augmentation. This notebook requires the ESC-50 dataset ([https://github.com/karolpiczak/ESC-50](https://github.com/karolpiczak/ESC-50)).
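The notebook's exact steps are not documented here; as an illustration only, a common way to use ESC-50 clips for augmentation is to mix them into each utterance as background noise at a target signal-to-noise ratio. The function name, the 16 kHz sample rate, and the synthetic signals below are assumptions, not taken from the notebook:

```python
import numpy as np

def mix_noise(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a background-noise clip into a speech clip at the given SNR (dB).

    Assumes both signals are float arrays at the same sample rate; the
    noise is tiled or truncated to match the speech length.
    """
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]

    # Scale the noise so that speech power / noise power equals 10^(snr_db/10).
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example: a synthetic one-second "utterance" at an assumed 16 kHz rate.
sr = 16_000
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)
noise = rng.standard_normal(sr // 2).astype(np.float32)
augmented = mix_noise(speech, noise, snr_db=10.0)
print(augmented.shape)  # (16000,)
```

Lower `snr_db` values produce noisier training examples; a range of SNRs is typically sampled per utterance.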
## Model

The provided Keyword-MLP model ([https://github.com/AI-Research-BD/Keyword-MLP](https://github.com/AI-Research-BD/Keyword-MLP)) was used for training and testing on this dataset. The TatarSCR repository ([https://github.com/IS2AI/TatarSCR.git](https://github.com/IS2AI/TatarSCR.git)) contains the code and configurations used in this work.
## Inference

Inference can be performed using either PyTorch or ONNX Runtime. For PyTorch, `inference.py` handles short (approximately one-second) audio clips, while `window_inference.py` processes longer clips using a sliding-window approach. ONNX inference is handled by `onnx_inference.py`. The `label_map.json` file is required for inference.
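The exact windowing used by `window_inference.py` is not specified here; a minimal sketch of the idea, with assumed window and hop sizes, is to split a long clip into overlapping one-second frames and classify each frame independently:

```python
import numpy as np

def sliding_windows(audio: np.ndarray, win: int, hop: int) -> np.ndarray:
    """Split a long clip into overlapping fixed-length windows.

    Each row is one `win`-sample window suitable as classifier input;
    a trailing partial window is zero-padded.
    """
    n = max(1, int(np.ceil(max(len(audio) - win, 0) / hop)) + 1)
    out = np.zeros((n, win), dtype=audio.dtype)
    for i in range(n):
        chunk = audio[i * hop : i * hop + win]
        out[i, : len(chunk)] = chunk
    return out

# Example: a 3.5-second clip at an assumed 16 kHz rate, one-second
# windows with a 0.5-second hop.
sr = 16_000
audio = np.zeros(int(3.5 * sr), dtype=np.float32)
windows = sliding_windows(audio, win=sr, hop=sr // 2)
print(windows.shape)  # (6, 16000)
```

In practice each row would be passed through the trained model and the per-window predictions aggregated, for example by keeping commands whose score exceeds a confidence threshold.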
## License

[Specify License here]