---
annotations_creators:
- crowdsourced
license: other
language_creators:
- crowdsourced
language:
- code
task_categories:
- text-generation
tags:
- code
- kotlin
- native Android development
size_categories:
- 100K<n<1M
source_datasets: []
pretty_name: iva-kotlin-codeint-raw
task_ids:
- language-modeling
---

# IVA Kotlin GitHub Code Dataset

## Dataset Description

This is the raw IVA Kotlin dataset extracted from GitHub.
It contains uncurated Kotlin files gathered for the purpose of training a code generation model.

The dataset consists of 464,215 Kotlin code files from GitHub, totaling ~361 MB of data.
The dataset was created from the public GitHub dataset on Google BigQuery.

### How to use it

To download the full dataset:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint', split='train')
```

To inspect a single entry:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint', split='train')
print(dataset[723])

# OUTPUT:
{
    "repo_name": "nemerosa/ontrack",
    "path": "ontrack-extension-notifications/src/main/java/net/nemerosa/ontrack/extension/notifications/webhooks/WebhookController.kt",
    "copies": "1",
    "size": "3248",
    "content": "...@RestController\n@RequestMapping(\"/extension/notifications/webhook\")\nclass WebhookController(\n private val webhookAdminService: WebhookAdminService,\n private val webhookExecutionService: ",
    "license": "mit"
}
```

## Data Structure

### Data Fields

|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in the GitHub repository|
|copies|string|number of occurrences of the file in the dataset|
|content|string|content of the source file|
|size|string|size of the source file in bytes|
|license|string|license of the GitHub repository|

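Note that `copies` and `size` are stored as strings, not numbers. A minimal sketch of normalizing a record before doing any arithmetic on it (the `normalize` helper and the sample record are illustrative, not part of the dataset's API):

```python
# Normalize string-typed numeric fields in a dataset record.
# The sample record mirrors the instance shown in this card.
record = {
    "repo_name": "nemerosa/ontrack",
    "copies": "1",
    "size": "3248",
    "license": "mit",
}

def normalize(rec: dict) -> dict:
    """Return a copy of the record with `copies` and `size` cast to int."""
    out = dict(rec)
    out["copies"] = int(rec["copies"])
    out["size"] = int(rec["size"])
    return out

clean = normalize(record)
print(clean["size"] + clean["copies"])  # arithmetic now works
```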
### Instance

```json
{
    "repo_name": "nemerosa/ontrack",
    "path": "ontrack-extension-notifications/src/main/java/net/nemerosa/ontrack/extension/notifications/webhooks/WebhookController.kt",
    "copies": "1",
    "size": "3248",
    "content": "...@RestController\n@RequestMapping(\"/extension/notifications/webhook\")\nclass WebhookController(\n private val webhookAdminService: WebhookAdminService,\n private val webhookExecutionService: ",
    "license": "mit"
}
```

## Languages

The dataset contains only Kotlin files.

```json
{
    "Kotlin": [".kt"]
}
```

## Licenses

Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.

```json
{
    "agpl-3.0": 9146,
    "apache-2.0": 272388,
    "artistic-2.0": 219,
    "bsd-2-clause": 896,
    "bsd-3-clause": 12328,
    "cc0-1.0": 411,
    "epl-1.0": 2111,
    "gpl-2.0": 11080,
    "gpl-3.0": 48911,
    "isc": 997,
    "lgpl-2.1": 297,
    "lgpl-3.0": 7749,
    "mit": 92540,
    "mpl-2.0": 3386,
    "unlicense": 1756
}
```

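Because every record carries its repository license, the dataset can be filtered to a permissive subset before training. A minimal stdlib sketch over a toy list of records (the `PERMISSIVE` allow-list is an illustrative assumption, not a recommendation or legal advice):

```python
# Keep only records whose repository license is in a permissive allow-list.
# PERMISSIVE is an illustrative choice; review licenses for your own use case.
PERMISSIVE = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc", "unlicense"}

records = [
    {"repo_name": "a/x", "license": "mit"},
    {"repo_name": "b/y", "license": "gpl-3.0"},
    {"repo_name": "c/z", "license": "apache-2.0"},
]

permissive_only = [r for r in records if r["license"] in PERMISSIVE]
print([r["repo_name"] for r in permissive_only])  # ['a/x', 'c/z']
```

On the real dataset the same predicate can be passed to `dataset.filter`.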
## Dataset Statistics

```json
{
    "Total size": "~361 MB",
    "Number of files": 464215,
    "Number of files under 500 bytes": 99845,
    "Average file size in bytes": 3252
}
```

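These statistics can be reproduced from the per-record `size` field. A minimal sketch over a toy list of sizes (the numbers below are made up for illustration; on the real dataset they would come from `int(record["size"])` over all records):

```python
# Compute summary statistics from per-record file sizes (in bytes).
# `sizes` is a toy sample standing in for the full dataset.
sizes = [120, 480, 900, 3200, 5000]

total_bytes = sum(sizes)
n_files = len(sizes)
n_small = sum(1 for s in sizes if s < 500)   # files under 500 bytes
avg_size = total_bytes // n_files            # integer average, as in the card

print(n_files, n_small, avg_size)  # 5 2 1940
```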
## Dataset Creation

The dataset was created using the public GitHub dataset on Google BigQuery:
https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code

The following steps were taken to gather the data:
1. Create a dataset and a table in a Google BigQuery project.
2. Create a bucket in Google Cloud Storage.
3. Create a query in the Google BigQuery project.
4. Run the query, configured to write its results to the dataset and table created in step 1.
5. Export the resulting table into the bucket created in step 2, as JSON with gzip compression.

These steps produced the following results:
* 2.7 TB processed
* 464,215 extracted rows/files
* 1.46 GB total logical bytes
* 7 json.gz files totaling 361 MB

The SQL query used is:
```sql
SELECT
  f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
  (SELECT f.*, ROW_NUMBER() OVER (PARTITION BY id ORDER BY path DESC) AS seqnum
   FROM `bigquery-public-data.github_repos.files` AS f) f
JOIN
  `bigquery-public-data.github_repos.contents` AS c
ON
  f.id = c.id AND seqnum = 1
JOIN
  `bigquery-public-data.github_repos.licenses` AS l
ON
  f.repo_name = l.repo_name
WHERE
  NOT c.binary AND ((f.path LIKE '%.kt') AND (c.size BETWEEN 0 AND 1048575))
```

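The `ROW_NUMBER() ... seqnum = 1` clause deduplicates files: the same content `id` can appear at many paths across repositories, and only the row whose path sorts last (`ORDER BY path DESC`) is kept. The same logic can be sketched in plain Python (the toy rows are illustrative):

```python
# Sketch of the query's ROW_NUMBER deduplication: for each file id,
# keep only the row whose path sorts last lexicographically.
rows = [
    {"id": "f1", "path": "a/Main.kt"},
    {"id": "f1", "path": "b/Main.kt"},
    {"id": "f2", "path": "lib/Util.kt"},
]

best: dict[str, dict] = {}
for row in rows:
    cur = best.get(row["id"])
    if cur is None or row["path"] > cur["path"]:
        best[row["id"]] = row

deduped = sorted(best.values(), key=lambda r: r["id"])
print([r["path"] for r in deduped])  # ['b/Main.kt', 'lib/Util.kt']
```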
## Data Splits
The dataset only contains a train split.

A curated version of this dataset was split across multiple repositories:
* Clean Version: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-train
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-valid

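If a local validation split is needed from the raw train split instead of the hosted clean-train/clean-valid repositories, a seeded shuffle-and-slice over record indices is one simple approach. A sketch (the 10% ratio and seed are arbitrary illustrative choices):

```python
import random

# Deterministically split record indices into train/valid subsets.
# The 10% validation ratio and the seed are illustrative choices.
def split_indices(n: int, valid_ratio: float = 0.1, seed: int = 42):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_valid = int(n * valid_ratio)
    return idx[n_valid:], idx[:n_valid]  # (train, valid)

train_idx, valid_idx = split_indices(100)
print(len(train_idx), len(valid_idx))  # 90 10
```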
# Considerations for Using the Data

The dataset consists of source code from a wide range of repositories.
As such, it can potentially include harmful or biased code, as well as sensitive information such as passwords or usernames.