|
--- |
|
license: mit |
|
language: |
|
- en |
|
tags: |
|
- code |
|
--- |
|
# MultiLang Code Parser Dataset (MLCPD) |
|
|
|
## Dataset Description |
|
|
|
The MultiLang Code Parser Dataset (MLCPD) is a comprehensive multi-language code dataset designed to benchmark language-agnostic AI code parsers. It currently offers a filtered version of the StarCoder dataset, parsed with language-specific tree-sitter parsers, with future plans to unify outputs into a standard JSON format for complete AST representation.
|
|
|
### Key Features |
|
|
|
- **Cleaned and Filtered Code**: Samples have been processed to remove outliers in terms of line length and code size |
|
- **Quality Metrics**: Each sample includes metadata on average line length, line count, AST node count, and parse error count
|
- **Multi-language Support**: 10 programming languages represented in separate subsets |
|
- **Consistent Format**: All samples follow the same Parquet structure for easy processing |
|
|
|
### Dataset Size |
|
|
|
The complete dataset is approximately 35GB in size. Individual language files vary in size, with the largest being C++ (5.85GB) and the smallest being Ruby (1.71GB). |
|
|
|
### Dataset Statistics |
|
|
|
| Language | Sample Count | Avg. Line Length | Avg. Line Count | |
|
|------------|--------------|------------------|-----------------| |
|
| C | 700,821 | 28.08 | 61.76 | |
|
| C++ | 707,641 | 28.16 | 87.88 | |
|
| C# | 705,203 | 29.53 | 44.26 | |
|
| Go | 700,331 | 25.18 | 68.22 | |
|
| Java | 711,922 | 30.85 | 54.40 | |
|
| JavaScript | 687,775 | 27.69 | 44.15 | |
|
| Python | 706,126 | 32.67 | 54.70 | |
|
| Ruby | 703,473 | 27.35 | 27.41 | |
|
| Scala | 702,833 | 35.30 | 44.38 | |
|
| TypeScript | 695,597 | 29.18 | 36.89 | |
|
|
|
## Dataset Structure |
|
|
|
The dataset is organized with separate Parquet files for each programming language: |
|
- `c_parsed_1.parquet` ... `c_parsed_4.parquet` - C language samples |
|
- `cpp_parsed_1.parquet` ... `cpp_parsed_4.parquet` - C++ language samples |
|
- `c_sharp_parsed_1.parquet` ... `c_sharp_parsed_4.parquet` - C# language samples |
|
- `go_parsed_1.parquet` ... `go_parsed_4.parquet` - Go language samples |
|
- `java_parsed_1.parquet` ... `java_parsed_4.parquet` - Java language samples |
|
- `javascript_parsed_1.parquet` ... `javascript_parsed_4.parquet` - JavaScript language samples |
|
- `python_parsed_1.parquet` ... `python_parsed_4.parquet` - Python language samples |
|
- `ruby_parsed_1.parquet` ... `ruby_parsed_4.parquet` - Ruby language samples |
|
- `scala_parsed_1.parquet` ... `scala_parsed_4.parquet` - Scala language samples |
|
- `typescript_parsed_1.parquet` ... `typescript_parsed_4.parquet` - TypeScript language samples |
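
Because each language is split across four shards, a `data_files` mapping like the one below (a sketch built from the filenames listed above) can be constructed programmatically and handed to `load_dataset`:

```python
# Build a data_files mapping covering all four shards of every language.
LANGUAGES = [
    "c", "cpp", "c_sharp", "go", "java",
    "javascript", "python", "ruby", "scala", "typescript",
]

data_files = {
    lang: [f"{lang}_parsed_{i}.parquet" for i in range(1, 5)]
    for lang in LANGUAGES
}

# One language's shards, e.g. for load_dataset(..., data_files=data_files["python"])
print(data_files["python"])
```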
|
|
|
Within each file, data is stored with the following schema: |
|
|
|
``` |
|
- language: string (the programming language of the code sample) |
|
- code: string (the complete code content) |
|
- avg_line_length: float (average character count per line) |
|
- line_count: integer (total number of lines in the code) |
|
- lang_specific_parse: string (tree-sitter parsed output of the code sample) |
|
- ast_node_count: integer (total number of nodes in the AST) |
|
- num_errors: integer (total number of parse errors reported by tree-sitter)
|
``` |
|
|
|
Each sample is stored as a row in the Parquet file with these seven columns.
|
|
|
## How to Access the Dataset |
|
|
|
### Using the Hugging Face `datasets` Library |
|
|
|
This dataset is hosted on the Hugging Face Hub and can be easily accessed using the `datasets` library. |
|
|
|
#### Install the Required Library |
|
|
|
```bash |
|
pip install datasets |
|
``` |
|
|
|
#### Import Library |
|
|
|
```python |
|
from datasets import load_dataset |
|
``` |
|
|
|
#### Load the Entire Dataset |
|
|
|
```python |
|
dataset = load_dataset( |
|
"jugalgajjar/MultiLang-Code-Parser-Dataset" |
|
) |
|
``` |
|
|
|
#### Load a Specific Language |
|
|
|
```python |
|
dataset = load_dataset( |
|
"jugalgajjar/MultiLang-Code-Parser-Dataset", |
|
data_files="python_parsed_1.parquet" |
|
) |
|
``` |
|
|
|
#### Stream Data |
|
|
|
```python |
|
dataset = load_dataset( |
|
"jugalgajjar/MultiLang-Code-Parser-Dataset", |
|
data_files="python_parsed_1.parquet", |
|
streaming=True |
|
) |
|
``` |
|
|
|
#### Access Data Content (After Downloading) |
|
|
|
```python |
|
try: |
|
for example in dataset["train"].take(5): |
|
print(example) |
|
print("-"*25) |
|
except Exception as e: |
|
print(f"An error occurred: {e}") |
|
``` |
|
|
|
### Manual Download |
|
|
|
You can also manually download specific language files from the Hugging Face repository page: |
|
|
|
1. Visit `https://huggingface.co/datasets/jugalgajjar/MultiLang-Code-Parser-Dataset` |
|
2. Navigate to the "Files and versions" tab
|
3. Click on the language file you want to download (e.g., `python_parsed_1.parquet`) |
|
4. Use the download button to save the file locally |
|
|
|
## Dataset Creation |
|
|
|
This dataset was created through the following process: |
|
|
|
1. Original code samples were collected from the StarCoder dataset ([URL](https://huggingface.co/datasets/bigcode/starcoderdata)) |
|
2. Statistical analysis was performed to identify quality metrics |
|
3. Outliers were removed using IQR (Interquartile Range) method |
|
4. Samples were filtered to remove excessively long or short code examples |
|
5. Data was normalized and standardized across languages |
|
6. Metadata (average line length and line count) was calculated for each sample |
|
7. Data was serialized in the efficient Parquet format for optimal storage and access speed |
|
8. Code samples from each language were parsed using language-specific tree-sitter parsers |
|
9. Metadata (AST node count and parse error count) was recorded for each sample
|
10. Final data for each language was split into four shards and stored in the Parquet format
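
Step 3's IQR filtering can be sketched as follows (a minimal illustration on made-up line-count values, not the exact script used to build the dataset):

```python
def iqr_bounds(values):
    """Return the (lower, upper) IQR fences for a list of numbers."""
    s = sorted(values)

    def quantile(q):
        # Linear interpolation between the two closest ranks.
        idx = q * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (idx - lo)

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Made-up per-sample line counts; the 5000-line file is an outlier.
line_counts = [40, 55, 60, 62, 70, 75, 80, 90, 5000]
low, high = iqr_bounds(line_counts)
kept = [n for n in line_counts if low <= n <= high]
print(kept)  # the 5000-line outlier is dropped
```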
|
|
|
## Citation |
|
|
|
If you use this dataset in your research or project, please cite it as follows: |
|
|
|
```bibtex |
|
@misc{mlcpd2025, |
|
  author = {Jugal Gajjar and Kamalasankari Subramaniakuppusamy and Kaustik Ranaware},

  title = {MultiLang Code Parser Dataset (MLCPD)},
|
year = {2025}, |
|
publisher = {HuggingFace}, |
|
howpublished = {\url{https://huggingface.co/datasets/jugalgajjar/MultiLang-Code-Parser-Dataset}} |
|
} |
|
``` |
|
|
|
## License |
|
|
|
This dataset is released under the MIT License. See the LICENSE file for more details. |