|
# BrushNet |
|
|
|
This repository contains a fork of the implementation of the paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion" that was used for generating PinPoint counterfactuals. |
|
Please refer to the <a href="https://tencentarc.github.io/BrushNet/">Original Project</a> for details on environment installation and a complete description of BrushNet's features.
|
|
|
|
|
|
|
|
|
## Getting Started |
|
|
|
### Environment Requirement
|
|
|
BrushNet has been implemented and tested with PyTorch 1.12.1 and Python 3.9.
|
|
|
Clone the repo: |
|
|
|
```shell
git clone https://github.com/TencentARC/BrushNet.git
```
|
|
|
We recommend first using `conda` to create a virtual environment, then installing `pytorch` following the [official instructions](https://pytorch.org/). For example:
|
|
|
|
|
```shell
conda create -n diffusers python=3.9 -y
conda activate diffusers
python -m pip install --upgrade pip
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
```
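
Since the repo is pinned to Python 3.9, a small interpreter check can catch a mismatched environment before anything else is installed. This helper is illustrative, not part of the repo:

```python
# Hypothetical version guard for the pinned environment above.
import sys

def version_ok(actual: tuple, required: tuple) -> bool:
    """True if `actual` matches `required` on major.minor."""
    return actual[:2] == required[:2]

# The repo is tested with Python 3.9; warn (don't fail) on mismatch.
if not version_ok(tuple(sys.version_info), (3, 9)):
    print(f"Warning: tested with Python 3.9, you are on "
          f"{sys.version_info.major}.{sys.version_info.minor}")
```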
|
|
|
Then, you can install diffusers (implemented in this repo) with: |
|
|
|
```shell
pip install -e .
```
|
|
|
After that, you can install the required packages through:
|
|
|
```shell
cd examples/brushnet/
pip install -r requirements.txt
```
|
|
|
### Data Download
|
|
|
|
|
**Dataset** |
|
|
|
After downloading the datasets (CC3M: https://ai.google.com/research/ConceptualCaptions/download and FACET: https://ai.meta.com/datasets/facet-downloads/), edit the dataset paths in
`./examples/brushnet/inpaint_cc3m.py` and
`./examples/brushnet/inpaint_facet.py`.
|
|
|
**Checkpoints** |
|
|
|
Checkpoints of BrushNet can be downloaded from [here](https://drive.google.com/drive/folders/1fqmS1CEOvXCxNWFrsSYd_jHYXxrydh1n?usp=drive_link). The ckpt folder needs to contain the pretrained BrushNet checkpoints and the pretrained Stable Diffusion checkpoint (realisticVisionV60B1_v51VAE from [Civitai](https://civitai.com/)). The data structure should look like:
|
|
|
```
|-- data
    |-- BrushData
    |-- BrushDench
    |-- EditBench
    |-- ckpt
        |-- realisticVisionV60B1_v51VAE
            |-- model_index.json
            |-- vae
            |-- ...
        |-- segmentation_mask_brushnet_ckpt
```
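
As a quick sanity check, a short script along these lines can confirm the layout above is in place before running inference. The helper and the `ROOT` constant are illustrative, not part of this repo; adjust the root to wherever you unpacked the data and checkpoints:

```python
from pathlib import Path

# Hypothetical layout check for the structure shown above.
ROOT = Path(".")  # assumption: data/ and ckpt/ live under the current directory

EXPECTED = [
    "data/BrushData",
    "data/BrushDench",
    "data/EditBench",
    "ckpt/realisticVisionV60B1_v51VAE/model_index.json",
    "ckpt/segmentation_mask_brushnet_ckpt",
]

def check_layout(root: Path) -> list:
    """Return the expected paths that are missing under `root`."""
    return [p for p in EXPECTED if not (root / p).exists()]

if __name__ == "__main__":
    missing = check_layout(ROOT)
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("Checkpoint/data layout looks complete.")
```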
|
|
|
|
|
|
|
## Running Scripts
|
|
|
### Inference
|
|
|
You can in-paint FACET with the script: |
|
|
|
```shell
python examples/brushnet/inpaint_facet.py
```
|
|
|
You can experiment with different prompts to in-paint different values of protected attributes. For the PP*, WB*, and C&A* setups, change the corresponding prompts to `f"A photo of a man/woman who is a {category}."`; otherwise, leave them as `f"A photo of a man/woman."`
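
The prompt logic described above can be sketched as a small helper. The function name and setup handling are hypothetical, not identifiers from this repo:

```python
from typing import Optional

# Illustrative sketch of the prompt choices described above.
CATEGORY_SETUPS = {"PP*", "WB*", "C&A*"}  # setups whose prompts mention the FACET category

def build_prompt(gender: str, setup: str, category: Optional[str] = None) -> str:
    """Return the in-painting prompt for one protected-attribute value."""
    if setup in CATEGORY_SETUPS:
        if category is None:
            raise ValueError(f"setup {setup!r} requires a category")
        return f"A photo of a {gender} who is a {category}."
    return f"A photo of a {gender}."
```

For example, `build_prompt("woman", "PP*", "doctor")` yields `"A photo of a woman who is a doctor."`, while any other setup falls back to the plain `"A photo of a woman."` prompt.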
|
|
|
For in-painting CC3M, use the script:
|
|
|
```shell
python examples/brushnet/inpaint_cc3m.py
```
|
|
|
|
|
### Evaluation
|
|
|
You can evaluate image realism with the following script:
|
|
|
```shell
python examples/brushnet/evaluate_dir.py
```
|
|
|
|
|
|
|
Make sure to set the paths to the image directories before running the script. |
|
|
|
|
|
|
|
Note that the evaluation script requires the following additional dependencies: |
|
- open_clip |
|
- hpsv2 |
|
- ImageReward |
|
- CLIPScore |