# Datasets for the Direct Preference for Denoising Diffusion Policy Optimization (D3PO)

**Description**: This repository contains the datasets for the D3PO method from the paper [Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model](https://arxiv.org/abs/2311.13231). The *d3po_dataset* directory pertains to the image distortion experiment with the [`anything-v5`](https://huggingface.co/stablediffusionapi/anything-v5) model. The *text2img_dataset* directory comprises the images generated by the pretrained, preferred-image fine-tuned, reward-weighted fine-tuned, and D3PO fine-tuned models in the prompt-image alignment experiment.
                
**Source Code**: The code used to generate this data can be found [here](https://github.com/yk7333/D3PO/).

**Directory**
- d3po_dataset
    - epoch1
        - all_img
            - *.png
        - deformed_img
            - *.png
        - json
            - data.json (required for training)
        - prompt.json
        - sample.pkl (required for training; see the loading sketch below)
    - epoch2
    - ...
    - epoch5

- text2img_dataset
    - img
    - data_*.json
    - plot.ipynb
    - prompt.txt
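
For convenience, here is a minimal Python sketch of how one might load the training files from one epoch of *d3po_dataset* and the prompts from *text2img_dataset*. The file names follow the layout above; the internal schemas of `data.json`, `sample.pkl`, and `prompt.json` are assumptions, not documented by this card.

```python
# Minimal loading sketch. Paths follow the directory layout above; the
# internal structure of each file is an assumption, not documented here.
import json
import pickle
from pathlib import Path

epoch_dir = Path("d3po_dataset/epoch1")

# Per-sample training records (schema assumed).
with open(epoch_dir / "json" / "data.json") as f:
    data = json.load(f)

# Sampled diffusion trajectories used for training (contents assumed).
with open(epoch_dir / "sample.pkl", "rb") as f:
    samples = pickle.load(f)

# Prompts for this epoch (schema assumed).
with open(epoch_dir / "prompt.json") as f:
    prompts = json.load(f)

# Prompts for the prompt-image alignment experiment, one per line (assumed).
t2i_prompts = Path("text2img_dataset/prompt.txt").read_text().splitlines()

print(len(t2i_prompts), "alignment prompts loaded")
```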

**Citation**
```
@article{yang2023using,
  title={Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model},
  author={Yang, Kai and Tao, Jian and Lyu, Jiafei and Ge, Chunjiang and Chen, Jiaxin and Li, Qimai and Shen, Weihan and Zhu, Xiaolong and Li, Xiu},
  journal={arXiv preprint arXiv:2311.13231},
  year={2023}
}
```