---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md

language:
  - en
library_name: diffusers
pipeline_tag: text-to-image

tags:
- Text-to-Image
- ControlNet
- Diffusers
- Flux.1-dev
- image-generation
- Stable Diffusion
base_model: black-forest-labs/FLUX.1-dev
---

# FLUX.1-dev-ControlNet-Union-Pro

This repository contains a unified ControlNet for FLUX.1-dev model jointly released by researchers from [InstantX Team](https://huggingface.co/InstantX) and [Shakker Labs](https://huggingface.co/Shakker-Labs).

<div class="container">
  <img src="./assets/poster.png" width="1024"/>
</div>


# Model Cards
- This checkpoint is a Pro version of [FLUX.1-dev-Controlnet-Union](https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union) trained with more steps and datasets.
- This model supports 7 control modes: canny (0), tile (1), depth (2), blur (3), pose (4), gray (5), low quality (6).
- The recommended `controlnet_conditioning_scale` range is 0.3-0.8.
- This model can be jointly used with other ControlNets.
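The mode indices above can be easy to mix up, so a small lookup helper keeps them straight. This is a hypothetical convenience mapping (not part of the `diffusers` API), built directly from the mode list stated in this card:

```python
# Hypothetical helper: maps the 7 control modes named in this model card
# to the integer indices expected by the pipeline's `control_mode` argument.
CONTROL_MODES = {
    "canny": 0,
    "tile": 1,
    "depth": 2,
    "blur": 3,
    "pose": 4,
    "gray": 5,
    "low_quality": 6,
}

def mode_index(name: str) -> int:
    """Return the integer control mode for a given mode name.

    Raises KeyError for modes this checkpoint does not support.
    """
    return CONTROL_MODES[name]
```

For example, `mode_index("depth")` returns `2`, the value used in the inference snippet below.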


# Showcases

<div class="container">
  <img src="./assets/teaser1.png" width="1024"/>
  <img src="./assets/teaser2.png" width="1024"/>
  <img src="./assets/teaser3.png" width="1024"/>
</div>


# Inference
Please install `diffusers` from [source](https://github.com/huggingface/diffusers), as [the PR](https://github.com/huggingface/diffusers/pull/9175) has not been included in a released version yet.
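A source install can be done directly with pip, for example:

```shell
# Install diffusers from the main branch on GitHub
pip install git+https://github.com/huggingface/diffusers.git
```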

# Multi-Controls Inference
```python
import torch
from diffusers.utils import load_image

from diffusers import FluxControlNetPipeline, FluxControlNetModel, FluxMultiControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union = './Shakker-Labs/FLUX.1-dev-Controlnet-Union'

controlnet_union = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)
controlnet = FluxMultiControlNetModel([controlnet_union]) # we always recommend loading via FluxMultiControlNetModel

pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = 'A bohemian-style female travel blogger with sun-kissed skin and messy beach waves.'
control_image_depth = load_image("https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro/resolve/main/assets/depth.jpg")
control_mode_depth = 2

control_image_canny = load_image("https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro/resolve/main/assets/canny.jpg")
control_mode_canny = 0

width, height = control_image_depth.size  # both control images share the same resolution

image = pipe(
    prompt, 
    control_image=[control_image_depth, control_image_canny],
    control_mode=[control_mode_depth, control_mode_canny],
    width=width,
    height=height,
    controlnet_conditioning_scale=[0.2, 0.4],
    num_inference_steps=24, 
    guidance_scale=3.5,
    generator=torch.manual_seed(42),
).images[0]
```

Loading multiple individual ControlNets alongside the union model is also supported, as before:
```python
from diffusers import FluxControlNetModel, FluxMultiControlNetModel

controlnet_model_union = './Shakker-Labs/FLUX.1-dev-Controlnet-Union'
controlnet_union = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)

controlnet_model_depth = './Shakker-Labs/FLUX.1-dev-Controlnet-Depth'
controlnet_depth = FluxControlNetModel.from_pretrained(controlnet_model_depth, torch_dtype=torch.bfloat16)

controlnet = FluxMultiControlNetModel([controlnet_union, controlnet_depth])

# set mode to None for other ControlNets
control_mode=[2, None]
```
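The key constraint when mixing ControlNets is that the per-model argument lists must line up index by index: each ControlNet receives the image, mode, and scale at its own position. A minimal sketch (no GPU or `diffusers` needed; the names and scale values here are illustrative placeholders, not real inputs):

```python
# Illustrative alignment check: order matches FluxMultiControlNetModel([...]).
models = ["union", "depth"]        # placeholder names for the two ControlNets above
control_mode = [2, None]           # 2 = depth mode for the union model; None for dedicated ControlNets
conditioning_scale = [0.4, 0.6]    # hypothetical per-model scales

# Each ControlNet receives the argument at its own index.
pairs = list(zip(models, control_mode, conditioning_scale))
print(pairs)  # [('union', 2, 0.4), ('depth', None, 0.6)]
```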

# Resources
- [InstantX/FLUX.1-dev-Controlnet-Canny](https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Depth](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro)

# Acknowledgements
This project is trained by [InstantX Team](https://huggingface.co/InstantX) and sponsored by [Shakker AI](https://www.shakker.ai/). All rights reserved.