Datasets: pulnip/ghibli-dataset
Tasks: Image Classification
Formats: imagefolder
Sub-tasks: multi-class-classification
Languages: English
Size: 1K - 10K
License: other
Update Repository Information
- LICENSE +14 -0
- README.md +73 -5
- default.jsonl +0 -0
- metadata.jsonl +0 -0
- scripts/gen_img.py +1 -1
- scripts/save_img.py +1 -1
- scripts/test.py +0 -32
LICENSE
ADDED
@@ -0,0 +1,14 @@
+This dataset includes content under multiple licenses:
+
+- Real images from Nechintosh/ghibli: studio-ghibli-nc-license
+- AI images from:
+  - nitrosocke/Ghibli-Diffusion: CreativeML OpenRAIL-M
+    Loaded with: torch_dtype=torch.float16
+  - KappaNeuro/studio-ghibli-style: CreativeML OpenRAIL++-M (fine-tuned from SDXL)
+    Loaded with: torch_dtype=torch.float16, variant="fp16"
+  - Note: While the KappaNeuro repository does not explicitly state a license, it is based on Stability AI's SDXL, which is released under the CreativeML Open RAIL++-M License; it is therefore assumed to inherit the same license and non-commercial restrictions.
+
+The combined dataset is provided under the most restrictive condition:
+**Non-commercial research and educational use only. Commercial use is strictly prohibited.**
+
+For details, please refer to the original model and dataset licenses.
README.md
CHANGED
@@ -1,15 +1,51 @@
+---
+size_categories:
+- 1K<n<10K
+license: other
+tags:
+- ghibli
+- ai-generated
+- image-classification
+language:
+- en
+pretty_name: Ghibli Real vs AI Dataset
+task_categories:
+- image-classification
+task_ids:
+- multi-class-classification
+splits:
+- name: train
+  num_examples: 4347
+annotations_creators:
+- machine-generated
+source_datasets:
+- Nechintosh/ghibli
+- nitrosocke/Ghibli-Diffusion
+- KappaNeuro/studio-ghibli-style
+dataset_info:
+  labels:
+  - real
+  - ai
+---
+
 # Ghibli Real vs AI-Generated Dataset
 
 This dataset is provided in two forms:
 
-### 1. `default.jsonl`
+### 1. (default) `metadata.jsonl`
 - One sample per line
-- Includes: `image`, `label`, `description`
+- Includes: `id`, `image`, `label`, `description`
 - Use this for standard classification or image-text training
 
+- Real images sourced from [Nechintosh/ghibli](https://huggingface.co/datasets/Nechintosh/ghibli) (810 images)
+- AI-generated images created using:
+  - [nitrosocke/Ghibli-Diffusion](https://huggingface.co/nitrosocke/Ghibli-Diffusion) (2727 images)
+  - [KappaNeuro/studio-ghibli-style](https://huggingface.co/KappaNeuro/studio-ghibli-style) (810 images)
+  - Note: While the KappaNeuro repository does not explicitly state a license, it is a fine-tuned model based on [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), which is distributed under the CreativeML Open RAIL++-M License; it is therefore assumed to inherit the same license and non-commercial restrictions.
+
 ### 2. `pairs.jsonl`
 - Real and fake images paired together
-- Includes: `real_image`, `
+- Includes: `real_image`, `ai_image`, shared `description`, `seed`
 - Use this for contrastive learning or meta-learning (e.g., ProtoNet)
 
 ### How to load
@@ -18,8 +54,40 @@ This dataset is provided in two forms:
 from datasets import load_dataset
 
 # Single image classification
-samples = load_dataset("pulnip/ghibli-dataset",
+samples = load_dataset("pulnip/ghibli-dataset", split="train")
 
 # Paired meta-learning structure
 pairs = load_dataset("pulnip/ghibli-dataset", data_files="pairs.jsonl", split="train")
-```
+
+# Convert labels to binary classification: 'real' vs 'ai'
+# Note: the original "label" field contains "real", "nitrosocke", and "KappaNeuro".
+# Treat all non-"real" labels as "ai" to use this dataset for binary classification.
+for sample in samples:
+    sample["binary_label"] = "real" if sample["label"] == "real" else "ai"
+```
+
+## License and Usage
+
+This dataset combines data from multiple sources. Please review the licensing conditions carefully.
+
+### Real Images
+- Source: [Nechintosh/ghibli](https://huggingface.co/datasets/Nechintosh/ghibli)
+- License: not explicitly stated; assumed to be **non-commercial research use only**
+
+### AI-Generated Images
+- Source models:
+  - [nitrosocke/Ghibli-Diffusion](https://huggingface.co/nitrosocke/Ghibli-Diffusion)
+    Loaded with: `torch_dtype=torch.float16`
+  - [KappaNeuro/studio-ghibli-style](https://huggingface.co/KappaNeuro/studio-ghibli-style)
+    Loaded with: `torch_dtype=torch.float16, variant="fp16"`
+  - These models are provided under community licenses that generally restrict usage to **non-commercial and research purposes**.
+
+---
+
+### Summary
+
+This repository is not published under a single license such as MIT.
+Because the dataset includes content from multiple sources with varying restrictions,
+**the dataset is licensed as 'other' and should be treated as non-commercial, research-use only.**
+
+Users are responsible for reviewing each component's license terms before redistribution or adaptation.
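The updated README points to `pairs.jsonl` for contrastive or meta-learning setups such as ProtoNet. Below is a minimal sketch, not part of this repository, of turning those paired records into two-class episodes; it assumes the fields are exactly `real_image`, `ai_image`, `description`, and `seed` as listed in the README and treats the image fields as opaque references, since their concrete type depends on how the file was written.

```python
# Hypothetical episode sampler for a ProtoNet-style setup; field names follow
# the README, and the helper below is illustrative only.
import random
from datasets import load_dataset

pairs = load_dataset("pulnip/ghibli-dataset", data_files="pairs.jsonl", split="train")

def sample_episode(dataset, n_support=5, n_query=5):
    """Draw a two-way (real vs. ai) episode from paired records."""
    idx = random.sample(range(len(dataset)), n_support + n_query)
    support_idx, query_idx = idx[:n_support], idx[n_support:]

    def as_examples(rows):
        # Each pair contributes one "real" and one "ai" example.
        return ([(dataset[i]["real_image"], "real") for i in rows]
                + [(dataset[i]["ai_image"], "ai") for i in rows])

    return {"support": as_examples(support_idx), "query": as_examples(query_idx)}

episode = sample_episode(pairs)
print(len(episode["support"]), len(episode["query"]))  # 10 10 with the defaults
```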
default.jsonl
DELETED
The diff for this file is too large to render.
See raw diff
metadata.jsonl
ADDED
The diff for this file is too large to render.
See raw diff
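Since the `metadata.jsonl` diff is too large to render, one quick way to inspect the new default file is to load it through `datasets` and print the inferred schema. This is only a sketch: the field names (`id`, `image`, `label`, `description`) and the label values come from the README, while the exact value types depend on how the file was generated.

```python
# Inspect the default split backed by metadata.jsonl; purely illustrative.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("pulnip/ghibli-dataset", split="train")
print(ds.features)            # inferred schema (id, image, label, description)
print(ds[0])                  # first record
print(Counter(ds["label"]))   # counts over "real", "nitrosocke", "KappaNeuro"
```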
scripts/gen_img.py
CHANGED
@@ -36,7 +36,7 @@ NUM_IMAGES = 3
 out_dir = f"data/{ID_PREFIX}"
 os.makedirs(out_dir, exist_ok=True)
 
-with open("
+with open("metadata.jsonl", "r", encoding="utf-8") as fin, \
      open("ai_entries.jsonl", "w", encoding="utf-8") as fout, \
      open("pairs.jsonl", "a", encoding="utf-8") as pairs:
     for i, line in enumerate(fin):
scripts/save_img.py
CHANGED
@@ -18,7 +18,7 @@ samples.sort(key=lambda s: natural_key(os.path.basename(s["image"]["path"])))
 
 os.makedirs("data/real", exist_ok=True)
 
-with open("
+with open("metadata.jsonl", "w") as f:
     for i, sample in enumerate(samples):
         caption = sample["caption"]
         src_path: str = sample["image"]["path"]
scripts/test.py
DELETED
@@ -1,32 +0,0 @@
|
|
1 |
-
import json
|
2 |
-
import re
|
3 |
-
|
4 |
-
def sort_pairs_by_ai_image(input_path, output_path):
|
5 |
-
# '..._<숫자1>_<숫자2>.jpg' 패턴: prefix는 숫자1 전까지 전부
|
6 |
-
pattern = re.compile(r"(.+)_(\d+)_(\d+)\.jpg$")
|
7 |
-
entries = []
|
8 |
-
|
9 |
-
# 1. 읽어서 (prefix, num1, num2, entry)로 저장
|
10 |
-
with open(input_path, 'r', encoding='utf-8') as infile:
|
11 |
-
for line in infile:
|
12 |
-
entry = json.loads(line)
|
13 |
-
ai_img = entry.get('ai_image', '')
|
14 |
-
m = pattern.match(ai_img)
|
15 |
-
if m:
|
16 |
-
prefix, num1, num2 = m.groups()
|
17 |
-
entries.append((prefix, int(num1), int(num2), entry))
|
18 |
-
else:
|
19 |
-
# 패턴 안 맞으면 맨 뒤로
|
20 |
-
entries.append((ai_img, float('inf'), float('inf'), entry))
|
21 |
-
|
22 |
-
# 2. (prefix, num1, num2) 순서로 정렬
|
23 |
-
entries.sort(key=lambda x: (x[0], x[1], x[2]))
|
24 |
-
|
25 |
-
# 3. new_pairs.jsonl에 기록
|
26 |
-
with open(output_path, 'w', encoding='utf-8') as outfile:
|
27 |
-
for _, _, _, entry in entries:
|
28 |
-
outfile.write(json.dumps(entry, ensure_ascii=False) + '\n')
|
29 |
-
|
30 |
-
if __name__ == "__main__":
|
31 |
-
sort_pairs_by_ai_image("pairs.jsonl", "new_pairs.jsonl")
|
32 |
-
print("✅ new_pairs.jsonl 생성 완료했어! 😉")