
Community Forensics: Using Thousands of Generators to Train Fake Image Detectors (CVPR 2025)

Paper/Project Page

This is a small version of the Community Forensics dataset. It contains roughly 11% of the generated images in the base dataset and is paired with real data that has a redistributable license. This dataset is intended for easier prototyping, since you do not have to download the corresponding real datasets separately.

We distribute this dataset under a cc-by-nc-sa-4.0 license for non-commercial research purposes only.

The following table compares the performance (average precision, AP) of a classifier trained on the base dataset with one trained on this version of the dataset:

Version   GAN     Lat. Diff.   Pix. Diff.   Commercial   Other   Mean
Base      0.995   0.996        0.947        0.985        0.998   0.984
Small     0.986   0.995        0.888        0.852        0.993   0.943
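
The scores above are average precision (AP) values. For readers benchmarking their own detectors against this table, below is a minimal sketch of computing AP with scikit-learn; the labels and scores are illustrative placeholders, not values from the dataset.

import numpy as np
from sklearn.metrics import average_precision_score

# ground-truth labels (1: fake, 0: real) and the detector's predicted
# probability that each image is fake; both arrays are placeholders
labels = np.array([1, 0, 1, 1, 0, 0])
scores = np.array([0.92, 0.11, 0.65, 0.48, 0.30, 0.05])

print(f"AP: {average_precision_score(labels, scores):.3f}")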

Dataset Summary

  • The Community Forensics (small) dataset is intended for developing and benchmarking forensics methods that detect or analyze AI-generated images. It contains 278K generated images collected from 4803 generator models, paired with 278K "real" images sourced from the FFHQ, VISION, COCO, and Landscapes HQ datasets.

Supported Tasks

  • Image Classification: identify whether a given image is AI-generated. We mainly study this task in our paper, but other tasks may be possible with our dataset (a minimal training sketch is included with the usage examples below).

Dataset Structure

Data Instances

Our dataset is formatted as a Parquet data frame with the following structure:

{
  "image_name": "00000162.png", 
  "format": "PNG",
  "resolution": "[512, 512]", 
  "mode": "RGB",
  "image_data": "b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\..." 
  "model_name": "stabilityai/stable-diffusion-2", 
  "nsfw_flag": False,
  "prompt": "montreal grand prix 2018 von icrdesigns",
  "real_source": "LAION",
  "subset": "Systematic",
  "split": "train",
  "label": "1",
  "architecture": "LatDiff"
}

Data Fields

image_name: Filename of the image.
format: PIL image format.
resolution: Image resolution.
mode: PIL image mode (e.g., RGB).
image_data: Image data in byte format. It can be read using Python's BytesIO (see the sketch after this list).
model_name: Name of the model used to sample this image. It has the format {author_name}/{model_name} for the Systematic subset and {model_name} for the other subsets.
nsfw_flag: NSFW flag determined using the Stable Diffusion Safety Checker.
prompt: Input prompt (if one exists).
real_source: Paired real dataset(s) that were used to source the prompts or to train the generators.
subset: Denotes which subset the image belongs to (Systematic: Hugging Face models, Manual: manually downloaded models, Commercial: commercial models).
split: Train/test split.
label: Fake/real label (1: fake, 0: real).
architecture: Architecture of the generative model that was used to generate this image. (Categories: LatDiff, PixDiff, GAN, other, real)
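
For reference, here is a short, hedged sketch of reading these fields once the dataset is loaded; the loading call mirrors the usage examples further below.

import io

import datasets as ds
from PIL import Image

# load the paired train split (see the usage examples below for caching and streaming options)
data = ds.load_dataset("OwensLab/CommunityForensics-Small", split="train")

# decode the raw image bytes of one record into a PIL image and inspect its metadata
example = data[0]
img = Image.open(io.BytesIO(example["image_data"]))
print(example["image_name"], img.size, img.mode, example["label"], example["architecture"])

# e.g., keep only non-NSFW images from the Systematic (Hugging Face) subset
sfw_systematic = data.filter(lambda x: (not x["nsfw_flag"]) and x["subset"] == "Systematic")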

Data splits

train: Default split containing the paired dataset (278K real and 278K generated images).

Usage examples

Default train/eval settings:

import datasets as ds
import PIL.Image as Image
import io

# default training set
commfor_small_train = ds.load_dataset("OwensLab/CommunityForensics-Small", split="train", cache_dir="~/.cache/huggingface/datasets")

# optionally shuffle the dataset
commfor_small_train = commfor_small_train.shuffle(seed=123, writer_batch_size=3000)

for i, data in enumerate(commfor_small_train):
  img, label = Image.open(io.BytesIO(data['image_data'])), data['label']
  ## Your operations here ##
  # e.g., img_torch = torchvision.transforms.functional.pil_to_tensor(img)

Note:

  • Downloading and indexing the data can take some time, but only on the first run. Downloading may use up to ~600GB of disk space (278GB of data + 278GB of re-indexed Arrow files).
  • It is possible to randomly access data by passing an index (e.g., commfor_small_train[10], commfor_small_train[247]).
  • You can set cache_dir to a different directory if space in your home directory is limited. By default, data is downloaded to ~/.cache/huggingface/datasets.

It is also possible to use streaming for some use cases (e.g., downloading only a certain subset or a small portion of data).

import datasets as ds
import PIL.Image as Image
import io

# stream the train split. Note that when streaming, you can only load specific splits
commfor_train_stream = ds.load_dataset("OwensLab/CommunityForensics-Small", split='train', streaming=True)
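
# (optional, illustrative) when streaming, you can narrow the stream to one subset on the fly,
# e.g., keep only the Systematic (Hugging Face) generators
commfor_train_stream = commfor_train_stream.filter(lambda x: x["subset"] == "Systematic")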

# optionally shuffle the streaming dataset
commfor_train_stream = commfor_train_stream.shuffle(seed=123, buffer_size=3000)

# usage example
for i, data in enumerate(commfor_train_stream):
  if i>=10000: # use only first 10000 samples
    break
  img, label = Image.open(io.BytesIO(data['image_data'])), data['label']
  ## Your operations here ##
  # e.g., img_torch = torchvision.transforms.functional.pil_to_tensor(img)
  

Please check Hugging Face documentation for more usage examples.
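
As a further illustration of the image-classification task described above, below is a minimal sketch of fine-tuning a small binary fake-vs-real classifier on the train split. It assumes PyTorch and torchvision are installed; the model choice (ResNet-18), image size, batch size, and learning rate are illustrative placeholders rather than the settings used in the paper.

import io

import datasets as ds
import torch
import torch.nn as nn
import torchvision
from PIL import Image

# resize decoded PIL images and convert them to tensors
transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize((224, 224)),
    torchvision.transforms.ToTensor(),
])

def collate(batch):
    # decode the image bytes of each record and stack them into a batch
    imgs = torch.stack([
        transform(Image.open(io.BytesIO(x["image_data"])).convert("RGB")) for x in batch
    ])
    labels = torch.tensor([x["label"] for x in batch], dtype=torch.float32)  # 1: fake, 0: real
    return imgs, labels

train_set = ds.load_dataset("OwensLab/CommunityForensics-Small", split="train")
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True, collate_fn=collate)

model = torchvision.models.resnet18(num_classes=1)  # single logit: fake vs. real
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

model.train()
for imgs, labels in loader:  # one pass shown; train for multiple epochs in practice
    optimizer.zero_grad()
    loss = criterion(model(imgs).squeeze(1), labels)
    loss.backward()
    optimizer.step()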

Below is the dataset card of the base dataset with minor modifications.

Dataset Creation

Curation Rationale

This dataset was created to address the limited model diversity of existing datasets for generated-image detection. While some existing datasets contain millions of images, they are typically sampled from a handful of generator models. We instead sample 2.7M images from 4803 generator models, approximately 34 times more generators than the most extensive previous dataset that we are aware of.

This is the "small" version of the dataset which contains approximately 11% of the base dataset (278K generated images) which are then paired with 278K "real" images for easier prototyping.

Collection Methodology

We collect generators in three different subgroups. (1) We systematically download and sample open source latent diffusion models from Hugging Face. (2) We manually sample open source generators with various architectures and training procedures. (3) We sample from both open and closed commercially available generators.

Personal and Sensitive Information

The dataset does not contain any sensitive identifying information (i.e., does not contain data that reveals information such as racial or ethnic origin, sexual orientation, religious or political beliefs).

Considerations of Using the Data

Social Impact of Dataset

This dataset may be useful for researchers in developing and benchmarking forensics methods. Such methods may aid users in better understanding a given image. However, we believe the classifiers, at least the ones that we have trained or benchmarked, still show error rates that are far too high for direct use in the wild, which can lead to unwanted consequences (e.g., falsely accusing an author of creating fake images, or allowing generated content to be certified as real).

Discussion of Biases

The dataset has primarily been sampled from LAION captions. This may introduce biases that can be present in web-scale data (e.g., favoring photos of humans over other categories of photos). In addition, a vast majority of the generators we collect are derivatives of Stable Diffusion, which may introduce a bias towards detecting certain types of generators.

Other Known Limitations

The generative models are sourced from the community, and the images sampled from them may contain inappropriate content. While in many contexts it is important to detect such images, these generated images may require further scrutiny before being used in other downstream applications.

Additional Information

Acknowledgement

We thank the creators of the many open source models that we used to collect the Community Forensics dataset. We thank Chenhao Zheng, Cameron Johnson, Matthias Kirchner, Daniel Geng, Ziyang Chen, Ayush Shrivastava, Yiming Dou, Chao Feng, Xuanchen Lu, Zihao Wei, Zixuan Pan, Inbum Park, Rohit Banerjee, and Ang Cao for the valuable discussions and feedback. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0123.

Licensing Information

We release the dataset with a cc-by-nc-sa-4.0 license for research purposes only. In addition, we note that each image in this dataset has been generated by a model with its own respective license. We therefore provide metadata for all models present in our dataset along with their license information. A vast majority of the generators use the CreativeML OpenRAIL-M license. Please refer to the metadata for detailed licensing information for your specific application.

Citation Information

@InProceedings{Park_2025_CVPR,
    author    = {Park, Jeongsoo and Owens, Andrew},
    title     = {Community Forensics: Using Thousands of Generators to Train Fake Image Detectors},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {8245-8257}
}