DegMaTsu committed · Commit 5471086 · verified · 1 Parent(s): 50e6701

Upload README.md

Files changed (1): README.md (+45 −108)
README.md CHANGED
@@ -1,7 +1,19 @@
 <div align="center">

 # ComfyUI
- **The most powerful and modular visual AI engine and application.**


 [![Website][website-shield]][website-url]
@@ -31,48 +43,17 @@
 ![ComfyUI Screenshot](https://github.com/user-attachments/assets/7ccaf2c1-9b72-41ae-9a89-5688c94b7abe)
 </div>

- ComfyUI lets you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. Available on Windows, Linux, and macOS.
-
- ## Get Started
-
- #### [Desktop Application](https://www.comfy.org/download)
- - The easiest way to get started.
- - Available on Windows & macOS.
-
- #### [Windows Portable Package](#installing)
- - Get the latest commits and completely portable.
- - Available on Windows.
-
- #### [Manual Install](#manual-install-windows-linux)
- Supports all operating systems and GPU types (NVIDIA, AMD, Intel, Apple Silicon, Ascend).
-
- ## [Examples](https://comfyanonymous.github.io/ComfyUI_examples/)
- See what ComfyUI can do with the [example workflows](https://comfyanonymous.github.io/ComfyUI_examples/).


 ## Features
 - Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
- - Image Models
- - SD1.x, SD2.x,
- - [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [SDXL Turbo](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
- - [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/)
- - [SD3 and SD3.5](https://comfyanonymous.github.io/ComfyUI_examples/sd3/)
- - Pixart Alpha and Sigma
- - [AuraFlow](https://comfyanonymous.github.io/ComfyUI_examples/aura_flow/)
- - [HunyuanDiT](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_dit/)
- - [Flux](https://comfyanonymous.github.io/ComfyUI_examples/flux/)
- - [Lumina Image 2.0](https://comfyanonymous.github.io/ComfyUI_examples/lumina2/)
- - [HiDream](https://comfyanonymous.github.io/ComfyUI_examples/hidream/)
- - Video Models
- - [Stable Video Diffusion](https://comfyanonymous.github.io/ComfyUI_examples/video/)
- - [Mochi](https://comfyanonymous.github.io/ComfyUI_examples/mochi/)
- - [LTX-Video](https://comfyanonymous.github.io/ComfyUI_examples/ltxv/)
- - [Hunyuan Video](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/)
- - [Nvidia Cosmos](https://comfyanonymous.github.io/ComfyUI_examples/cosmos/)
- - [Wan 2.1](https://comfyanonymous.github.io/ComfyUI_examples/wan/)
- - 3D Models
- - [Hunyuan3D 2.0](https://docs.comfy.org/tutorials/3d/hunyuan3D-2)
- - [Stable Audio](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
 - Asynchronous Queue system
 - Many optimizations: Only re-executes the parts of the workflow that changes between executions.
 - Smart memory management: can automatically run models on GPUs with as low as 1GB vram.
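The "only re-executes the parts of the workflow that changes" optimization listed above can be illustrated with a toy sketch. This is not ComfyUI's actual executor; the node names and functions below are invented for illustration. The idea: each node's output is cached under a key built from its inputs, so an unchanged subgraph is served from cache on the next run.

```python
# Toy sketch of ComfyUI-style partial re-execution (illustrative only,
# not the real executor): cache each node's output keyed by its inputs.
cache = {}
runs = []  # records which nodes actually executed

def run_node(name, fn, *inputs):
    key = (name, inputs)
    if key not in cache:
        runs.append(name)
        cache[key] = fn(*inputs)
    return cache[key]

# First run: both nodes execute.
latent = run_node("encode", str.upper, "a cat")
image = run_node("sample", "{}/{}".format, latent, 42)

# Second run with only the seed changed: "encode" is a cache hit,
# so only "sample" re-executes.
latent = run_node("encode", str.upper, "a cat")
image2 = run_node("sample", "{}/{}".format, latent, 43)
```

Changing only the seed leaves the "encode" node untouched, which is the behavior the feature list describes.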
@@ -92,6 +73,9 @@ See what ComfyUI can do with the [example workflows](https://comfyanonymous.gith
 - [GLIGEN](https://comfyanonymous.github.io/ComfyUI_examples/gligen/)
 - [Model Merging](https://comfyanonymous.github.io/ComfyUI_examples/model_merging/)
 - [LCM models and Loras](https://comfyanonymous.github.io/ComfyUI_examples/lcm/)
 - Latent previews with [TAESD](#how-to-show-high-quality-previews)
 - Starts up very fast.
 - Works fully offline: will never download anything.
@@ -129,8 +113,6 @@ Workflow examples can be found on the [Examples page](https://comfyanonymous.git
 | `Q` | Toggle visibility of the queue |
 | `H` | Toggle visibility of history |
 | `R` | Refresh graph |
- | `F` | Show/Hide menu |
- | `.` | Fit view to selection (Whole graph when nothing is selected) |
 | Double-Click LMB | Open node quick search palette |
 | `Shift` + Drag | Move multiple wires at once |
 | `Ctrl` + `Alt` + LMB | Disconnect all wires from clicked slot |
@@ -139,7 +121,7 @@ Workflow examples can be found on the [Examples page](https://comfyanonymous.git

 # Installing

- ## Windows Portable

 There is a portable standalone build for Windows that should work for running on Nvidia GPUs or for running on your CPU only on the [releases page](https://github.com/comfyanonymous/ComfyUI/releases).

@@ -149,8 +131,6 @@ Simply download, extract with [7-Zip](https://7-zip.org) and run. Make sure you

 If you have trouble extracting it, right click the file -> properties -> unblock

- If you have a 50 series Blackwell card like a 5090 or 5080 see [this discussion thread](https://github.com/comfyanonymous/ComfyUI/discussions/6643)
-
 #### How do I share models between another UI and ComfyUI?

 See the [Config file](extra_model_paths.yaml.example) to set the search paths for models. In the standalone windows build you can find this file in the ComfyUI directory. Rename this file to extra_model_paths.yaml and edit it with your favorite text editor.
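For illustration, a minimal sketch of what a shared-path entry in that file can look like. The key names below are an assumption for illustration; the authoritative template is `extra_model_paths.yaml.example` in the repository.

```yaml
# Hypothetical sketch — consult extra_model_paths.yaml.example for the
# real key names and layout.
a111:
  base_path: /path/to/stable-diffusion-webui/
  checkpoints: models/Stable-diffusion
  vae: models/VAE
  loras: models/Lora
```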
@@ -159,18 +139,9 @@ See the [Config file](extra_model_paths.yaml.example) to set the search paths fo

 To run it on services like paperspace, kaggle or colab you can use my [Jupyter Notebook](notebooks/comfyui_colab.ipynb)

-
- ## [comfy-cli](https://docs.comfy.org/comfy-cli/getting-started)
-
- You can install and start ComfyUI using comfy-cli:
- ```bash
- pip install comfy-cli
- comfy install
- ```
-
 ## Manual Install (Windows, Linux)

- python 3.13 is supported but using 3.12 is recommended because some custom nodes and their dependencies might not support it yet.

 Git clone this repo.

@@ -182,45 +153,21 @@ Put your VAE in: models/vae
 ### AMD GPUs (Linux only)
 AMD users can install rocm and pytorch with pip if you don't have it already installed, this is the command to install the stable version:

- ```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2.4```
-
- This is the command to install the nightly with ROCm 6.3 which might have some performance improvements:
-
- ```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.3```
-
- ### Intel GPUs (Windows and Linux)
-
- (Option 1) Intel Arc GPU users can install native PyTorch with torch.xpu support using pip (currently available in PyTorch nightly builds). More information can be found [here](https://pytorch.org/docs/main/notes/get_start_xpu.html)
-
- 1. To install PyTorch nightly, use the following command:
-
- ```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu```
-
- 2. Launch ComfyUI by running `python main.py`
-
-
- (Option 2) Alternatively, Intel GPUs supported by Intel Extension for PyTorch (IPEX) can leverage IPEX for improved performance.

- 1. For Intel® Arc™ A-Series Graphics utilizing IPEX, create a conda environment and use the commands below:

- ```
- conda install libuv
- pip install torch==2.3.1.post0+cxx11.abi torchvision==0.18.1.post0+cxx11.abi torchaudio==2.3.1.post0+cxx11.abi intel-extension-for-pytorch==2.3.110.post0+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
- ```
-
- For other supported Intel GPUs with IPEX, visit [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu) for more information.
-
- Additional discussion and help can be found [here](https://github.com/comfyanonymous/ComfyUI/discussions/476).

 ### NVIDIA

 Nvidia users should install stable pytorch using this command:

- ```pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu126```

- This is the command to install pytorch nightly instead which supports the new blackwell 50xx series GPUs and might have performance improvements.

- ```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128```

 #### Troubleshooting

@@ -240,6 +187,17 @@ After this you should have everything installed and can proceed to running Comfy

 ### Others:

 #### Apple Mac silicon

 You can install ComfyUI in Apple Mac silicon (M1 or M2) with any recent macOS version.
@@ -255,23 +213,6 @@ You can install ComfyUI in Apple Mac silicon (M1 or M2) with any recent macOS ve

 ```pip install torch-directml``` Then you can launch ComfyUI with: ```python main.py --directml```

- #### Ascend NPUs
-
- For models compatible with Ascend Extension for PyTorch (torch_npu). To get started, ensure your environment meets the prerequisites outlined on the [installation](https://ascend.github.io/docs/sources/ascend/quick_install.html) page. Here's a step-by-step guide tailored to your platform and installation method:
-
- 1. Begin by installing the recommended or newer kernel version for Linux as specified in the Installation page of torch-npu, if necessary.
- 2. Proceed with the installation of Ascend Basekit, which includes the driver, firmware, and CANN, following the instructions provided for your specific platform.
- 3. Next, install the necessary packages for torch-npu by adhering to the platform-specific instructions on the [Installation](https://ascend.github.io/docs/sources/pytorch/install.html#pytorch) page.
- 4. Finally, adhere to the [ComfyUI manual installation](#manual-install-windows-linux) guide for Linux. Once all components are installed, you can run ComfyUI as described earlier.
-
- #### Cambricon MLUs
-
- For models compatible with Cambricon Extension for PyTorch (torch_mlu). Here's a step-by-step guide tailored to your platform and installation method:
-
- 1. Install the Cambricon CNToolkit by adhering to the platform-specific instructions on the [Installation](https://www.cambricon.com/docs/sdk_1.15.0/cntoolkit_3.7.2/cntoolkit_install_3.7.2/index.html)
- 2. Next, install the PyTorch(torch_mlu) following the instructions on the [Installation](https://www.cambricon.com/docs/sdk_1.15.0/cambricon_pytorch_1.17.0/user_guide_1.9/index.html)
- 3. Launch ComfyUI by running `python main.py`
-
 # Running

 ```python main.py```
@@ -290,8 +231,6 @@ You can enable experimental memory efficient attention on pytorch 2.5 in ComfyUI

 ```TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention```

- You can also try setting this env variable `PYTORCH_TUNABLEOP_ENABLED=1` which might speed things up at the cost of a very slow initial run.
-
 # Notes

 Only parts of the graph that have an output with all the correct inputs will be executed.
@@ -327,8 +266,6 @@ Use `--tls-keyfile key.pem --tls-certfile cert.pem` to enable TLS/SSL, the app w

 ## Support and dev channel

- [Discord](https://comfy.org/discord): Try the #help or #feedback channels.
-
 [Matrix space: #comfyui_space:matrix.org](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) (it's like discord but open source).

 See also: [https://www.comfy.org/](https://www.comfy.org/)
@@ -345,7 +282,7 @@ For any bugs, issues, or feature requests related to the frontend, please use th

 The new frontend is now the default for ComfyUI. However, please note:

- 1. The frontend in the main ComfyUI repository is updated fortnightly.
 2. Daily releases are available in the separate frontend repository.

 To use the most up-to-date frontend version:
@@ -362,7 +299,7 @@ To use the most up-to-date frontend version:
 --front-end-version Comfy-Org/ComfyUI_frontend@1.2.2
 ```

- This approach allows you to easily switch between the stable fortnightly release and the cutting-edge daily updates, or even specific versions for testing purposes.

 ### Accessing the Legacy Frontend

@@ -378,4 +315,4 @@ This will use a snapshot of the legacy frontend preserved in the [ComfyUI Legacy

 ### Which GPU should I buy for this?

- [See this page for some recommendations](https://github.com/comfyanonymous/ComfyUI/wiki/Which-GPU-should-I-buy-for-ComfyUI)
 
+ ---
+ title: Flux Style Shaping
+ emoji: 🚀
+ colorFrom: indigo
+ colorTo: gray
+ sdk: gradio
+ sdk_version: 5.12.0
+ app_file: app.py
+ pinned: false
+ license: mit
+ short_description: Optical illusions and style transfer with FLUX
+ ---
 <div align="center">

 # ComfyUI
+ **The most powerful and modular diffusion model GUI and backend.**


 [![Website][website-shield]][website-url]
 
 ![ComfyUI Screenshot](https://github.com/user-attachments/assets/7ccaf2c1-9b72-41ae-9a89-5688c94b7abe)
 </div>

+ This UI lets you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. To see some workflow examples of what ComfyUI can do, check out:
+ ### [ComfyUI Examples](https://comfyanonymous.github.io/ComfyUI_examples/)

+ ### [Installing ComfyUI](#installing)

 ## Features
 - Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
+ - Fully supports SD1.x, SD2.x, [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [Stable Video Diffusion](https://comfyanonymous.github.io/ComfyUI_examples/video/), [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/), [SD3](https://comfyanonymous.github.io/ComfyUI_examples/sd3/) and [Stable Audio](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
+ - [LTX-Video](https://comfyanonymous.github.io/ComfyUI_examples/ltxv/)
+ - [Flux](https://comfyanonymous.github.io/ComfyUI_examples/flux/)
+ - [Mochi](https://comfyanonymous.github.io/ComfyUI_examples/mochi/)
 - Asynchronous Queue system
 - Many optimizations: Only re-executes the parts of the workflow that changes between executions.
 - Smart memory management: can automatically run models on GPUs with as low as 1GB vram.
 
 - [GLIGEN](https://comfyanonymous.github.io/ComfyUI_examples/gligen/)
 - [Model Merging](https://comfyanonymous.github.io/ComfyUI_examples/model_merging/)
 - [LCM models and Loras](https://comfyanonymous.github.io/ComfyUI_examples/lcm/)
+ - [SDXL Turbo](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
+ - [AuraFlow](https://comfyanonymous.github.io/ComfyUI_examples/aura_flow/)
+ - [HunyuanDiT](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_dit/)
 - Latent previews with [TAESD](#how-to-show-high-quality-previews)
 - Starts up very fast.
 - Works fully offline: will never download anything.
 
 | `Q` | Toggle visibility of the queue |
 | `H` | Toggle visibility of history |
 | `R` | Refresh graph |
 | Double-Click LMB | Open node quick search palette |
 | `Shift` + Drag | Move multiple wires at once |
 | `Ctrl` + `Alt` + LMB | Disconnect all wires from clicked slot |
 

 # Installing

+ ## Windows

 There is a portable standalone build for Windows that should work for running on Nvidia GPUs or for running on your CPU only on the [releases page](https://github.com/comfyanonymous/ComfyUI/releases).

 

 If you have trouble extracting it, right click the file -> properties -> unblock

 #### How do I share models between another UI and ComfyUI?

 See the [Config file](extra_model_paths.yaml.example) to set the search paths for models. In the standalone windows build you can find this file in the ComfyUI directory. Rename this file to extra_model_paths.yaml and edit it with your favorite text editor.
 

 To run it on services like paperspace, kaggle or colab you can use my [Jupyter Notebook](notebooks/comfyui_colab.ipynb)

 ## Manual Install (Windows, Linux)

+ Note that some dependencies do not yet support python 3.13 so using 3.12 is recommended.

 Git clone this repo.

 
 ### AMD GPUs (Linux only)
 AMD users can install rocm and pytorch with pip if you don't have it already installed, this is the command to install the stable version:

+ ```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2```

+ This is the command to install the nightly with ROCm 6.2 which might have some performance improvements:

+ ```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.2```

 ### NVIDIA

 Nvidia users should install stable pytorch using this command:

+ ```pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124```

+ This is the command to install pytorch nightly instead which might have performance improvements:

+ ```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124```

 #### Troubleshooting

 

 ### Others:

+ #### Intel GPUs
+
+ Intel GPU support is available for all Intel GPUs supported by Intel's Extension for Pytorch (IPEX) with the support requirements listed in the [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu) page. Choose your platform and method of install and follow the instructions. The steps are as follows:
+
+ 1. Start by installing the drivers or kernel listed or newer in the Installation page of IPEX linked above for Windows and Linux if needed.
+ 1. Follow the instructions to install [Intel's oneAPI Basekit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) for your platform.
+ 1. Install the packages for IPEX using the instructions provided in the Installation page for your platform.
+ 1. Follow the [ComfyUI manual installation](#manual-install-windows-linux) instructions for Windows and Linux and run ComfyUI normally as described above after everything is installed.
+
+ Additional discussion and help can be found [here](https://github.com/comfyanonymous/ComfyUI/discussions/476).
+
 #### Apple Mac silicon

 You can install ComfyUI in Apple Mac silicon (M1 or M2) with any recent macOS version.
 

 ```pip install torch-directml``` Then you can launch ComfyUI with: ```python main.py --directml```

 # Running

 ```python main.py```
 

 ```TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention```

 # Notes

 Only parts of the graph that have an output with all the correct inputs will be executed.
 

 ## Support and dev channel

 [Matrix space: #comfyui_space:matrix.org](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) (it's like discord but open source).

 See also: [https://www.comfy.org/](https://www.comfy.org/)
 

 The new frontend is now the default for ComfyUI. However, please note:

+ 1. The frontend in the main ComfyUI repository is updated weekly.
 2. Daily releases are available in the separate frontend repository.

 To use the most up-to-date frontend version:
 
 --front-end-version Comfy-Org/ComfyUI_frontend@1.2.2
 ```

+ This approach allows you to easily switch between the stable weekly release and the cutting-edge daily updates, or even specific versions for testing purposes.

 ### Accessing the Legacy Frontend

 

 ### Which GPU should I buy for this?

+ [See this page for some recommendations](https://github.com/comfyanonymous/ComfyUI/wiki/Which-GPU-should-I-buy-for-ComfyUI)