Stable Diffusion WebUI Forge - Classic

Stable Diffusion WebUI Forge is a platform on top of the original Stable Diffusion WebUI by AUTOMATIC1111, to make development easier, optimize resource management, speed up inference, and study experimental features.
The name "Forge" is inspired by "Minecraft Forge". This project aims to become the Forge of Stable Diffusion WebUI.

- lllyasviel
(paraphrased)


"Classic" mainly serves as an archive for the "previous" version of Forge, which was built on Gradio 3.41.2 before the major changes (see the original announcement) were introduced. Additionally, this fork is focused exclusively on SD1 and SDXL checkpoints, having various optimizations implemented, with the main goal of being the lightest WebUI without any bloatwares.

How to Install

See the Installation section below.

Features [May. 21]

Most base features of the original Automatic1111 Webui should still function

New Features

  • Support uv package manager
    • requires manually installing uv
    • drastically speed up installation
    • see Commandline
  • Support SageAttention
  • Support FlashAttention
  • Support fast fp16_accumulation
    • requires PyTorch 2.7.0 +
    • ~25% speed up
    • see Commandline
  • Support fast cublas operation (CublasLinear)
  • Support fast fp8 operation (torch._scaled_mm)
    • requires RTX 40 +
    • ~10% speed up; reduce quality
    • enable in Settings/Optimizations

  • Both fp16_accumulation and cublas_ops achieve the same speed up; if you have already installed/updated to PyTorch 2.7.0, you do not need to go for cublas_ops
  • The fp16_accumulation and cublas_ops require fp16 precision, and are thus not compatible with the fp8 operation
  • Implement new Samplers
    • (ported from reForge Webui)
  • Implement Scheduler Dropdown
    • (backported from Automatic1111 Webui upstream)
    • enable in Settings/UI alternatives
  • Implement RescaleCFG
    • reduce burnt colors; mainly for v-pred checkpoints
    • enable in Settings/UI alternatives
  • Implement MaHiRo
    • alternative CFG calculation; improve prompt adherence
    • enable in Settings/UI alternatives
  • Implement diskcache for hashes
    • (backported from Automatic1111 Webui upstream)
  • Implement skip_early_cond
    • (backported from Automatic1111 Webui upstream)
    • enable in Settings/Optimizations
  • Support v-pred SDXL checkpoints (e.g. NoobAI)
  • Support new LoRA architectures
  • Update spandrel
    • support new Upscaler architectures
  • Add pillow-heif package
    • support .avif and .heif images
  • Automatically determine the optimal row count for X/Y/Z Plot
  • DepthAnything v2 Preprocessor
  • Support NoobAI Inpaint ControlNet
  • Support Union / ProMax ControlNet
    • they simply always show up in the dropdown

Removed Features

  • SD2
  • Alt-Diffusion
  • Instruct-Pix2Pix
  • Hypernetworks
  • SVD
  • Z123
  • CLIP Interrogator
  • Deepbooru Interrogator
  • Textual Inversion Training
  • Checkpoint Merging
  • LDSR
  • Most built-in Extensions
  • Some built-in Scripts
  • Some Samplers
  • Sampler in RadioGroup
  • test scripts
  • Some Preprocessors (ControlNet)
  • Photopea and openpose_editor (ControlNet)
  • Unix .sh launch scripts
    • You can still use this WebUI by copying a launch script from another working WebUI; I just don't want to maintain them...

Optimizations

  • [Freedom] Natively integrate the SD1 and SDXL logic
    • no longer git clone any repository on fresh install
    • no more random hacks and monkey patches
  • Fix memory leak when switching checkpoints
  • Clean up the ldm_patched (i.e. comfy) folder
  • Remove unused cmd_args
  • Remove unused args_parser
  • Remove unused shared_options
  • Remove legacy code
  • Remove duplicated upscaler code
    • put every upscaler inside the ESRGAN folder
    • optimize upscaler logic
  • Improve color correction
  • Improve hash caching
  • Improve error logs
    • no longer just print TypeError: 'NoneType' object is not iterable
  • Revamp settings
    • improve formatting
    • update descriptions
  • Check for Extension updates in parallel
  • Move the embeddings folder into the models folder
  • ControlNet Rewrite
    • change Units to gr.Tab
    • remove multi-inputs, as they are "misleading"
    • change visible toggle to interactive toggle; now the UI will no longer jump around
    • improve how Presets are applied
  • Disable Refiner by default
    • enable again in Settings/UI alternatives
  • Disable Tree View by default
    • enable again in Settings/Extra Networks
  • Run text encoder on CPU by default
  • Fix pydantic Errors
  • Fix Soft Inpainting
  • Lint & Format
  • Update Pillow
    • faster image processing
  • Update protobuf
    • faster insightface loading
  • Update to latest PyTorch
    • torch==2.7.0+cu128
    • xformers==0.0.30

If your GPU does not support the latest PyTorch, manually install an older version of PyTorch (see Install older PyTorch below)

  • No longer install open-clip twice
  • Update some packages to newer versions
  • Update recommended Python to 3.11.9
  • many more... ™

Commandline

These flags can be added after the set COMMANDLINE_ARGS= line in the webui-user.bat (separate each flag with a space)
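
For example, the relevant line in webui-user.bat might look like this (the flags shown are illustrative; pick only the ones you need from the lists below):

    set COMMANDLINE_ARGS=--xformers --api --no-download-sd-model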

A1111 built-in

  • --no-download-sd-model: Do not download a default checkpoint
    • can be removed after you download some checkpoints of your choice
  • --xformers: Install the xformers package to speed up generation
    • Currently, torch==2.7.0 does not support xformers yet
  • --port: Specify a server port to use
    • defaults to 7860
  • --api: Enable API access

  • Once you have successfully launched the WebUI, you can add the following flags to bypass some validation steps and improve the startup time (see the example after this list)
    • --skip-prepare-environment
    • --skip-install
    • --skip-python-version-check
    • --skip-torch-cuda-test
    • --skip-version-check

Remove them when installing an Extension, as they also block Extensions from installing their requirements
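
For instance, once everything is installed and working, a webui-user.bat line combining the validation-skip flags above might look like this (illustrative only):

    set COMMANDLINE_ARGS=--xformers --skip-prepare-environment --skip-install --skip-python-version-check --skip-torch-cuda-test --skip-version-check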

by Forge

  • For RTX 30 and above, you can add the following flags to slightly increase performance; on rare occasions, however, they may cause OutOfMemory errors or even crash the WebUI, and in certain configurations they may even lower the speed instead
    • --cuda-malloc
    • --cuda-stream
    • --pin-shared-memory

by Classic

  • --uv: Replace the python -m pip calls with uv pip to massively speed up package installation
  • --uv-symlink: Same as above; but additionally pass --link-mode symlink to the commands
    • significantly reduces installation size (~7 GB to ~100 MB)

Using symlink means packages are accessed directly from the cache folders; refrain from clearing the cache while using this option

  • --model-ref: Points to a central models folder that contains all your models
    • said folder should contain subfolders like Stable-diffusion, Lora, VAE, ESRGAN, etc.

This simply replaces the models folder, rather than adding on top of it
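
As a sketch, assuming a hypothetical central folder at D:\AI\models containing the usual subfolders (Stable-diffusion, Lora, VAE, ESRGAN, ...), these flags could be combined like this:

    set COMMANDLINE_ARGS=--uv-symlink --model-ref D:\AI\models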

  • --fast-fp16: Enable the allow_fp16_accumulation option
    • requires PyTorch 2.7.0 +
  • --sage: Install the sageattention package to speed up generation
    • requires triton
    • requires RTX 30 +
    • only affects SDXL

For RTX 50 users, you may need to manually install sageattention 2 instead (see Install sageattention 2 below)

With SageAttention 2:
  • --sageattn2-api: Select the function used by SageAttention 2
    • options:
      • auto (default)
      • triton-fp16
      • cuda-fp16
      • cuda-fp8
    • try the fp16 options if you get NaN (black images) on auto
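A hedged example combining these flags (assuming triton is installed and you hit NaN/black images on auto):

    set COMMANDLINE_ARGS=--sage --sageattn2-api triton-fp16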

Installation

  1. Install git

  2. Clone the Repo

    git clone https://github.com/Haoming02/sd-webui-forge-classic
    
  3. Set up Python

Recommended Method
  • Install uv
  • Set up venv
    cd sd-webui-forge-classic
    uv venv venv --python 3.11 --seed
    
  • Add the --uv flag to webui-user.bat
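
A minimal webui-user.bat sketch for this setup (the other lines of the file stay unchanged; combine with any flags from the Commandline section as needed):

    set COMMANDLINE_ARGS=--uv
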
Standard Method
  1. (Optional) Configure Commandline
  2. Launch the WebUI via webui-user.bat
  3. During the first launch, it will automatically install all the requirements
  4. Once the installation is finished, the WebUI will start in a browser automatically

Install cublas

  1. Ensure the WebUI can already launch properly by following the installation steps first

  2. Open the console in the WebUI directory

    cd sd-webui-forge-classic
    
  3. Start the virtual environment

    venv\scripts\activate
    
  4. Create a new folder

    mkdir repo
    cd repo
    
  5. Clone the repo

    git clone https://github.com/aredden/torch-cublas-hgemm
    cd torch-cublas-hgemm
    
  6. Install the library

    pip install -e . --no-build-isolation
    
    • If you installed uv, use uv pip install instead
    • The installation takes a few minutes

Install triton

  1. Ensure the WebUI can already launch properly by following the installation steps first
  2. Open the console in the WebUI directory
    cd sd-webui-forge-classic
    
  3. Start the virtual environment
    venv\scripts\activate
    
  4. Install the library
    • Windows
      pip install triton-windows
      
    • Linux
      pip install triton
      
    • If you installed uv, use uv pip install instead

Install flash-attn

  1. Ensure the WebUI can already launch properly by following the installation steps first
  2. Open the console in the WebUI directory
    cd sd-webui-forge-classic
    
  3. Start the virtual environment
    venv\scripts\activate
    
  4. Install the library
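
    A sketch assuming the PyPI flash-attn package (on Windows, a prebuilt wheel matching your Python, PyTorch, and CUDA versions may be required instead):

    pip install flash-attn --no-build-isolation
    
    • If you installed uv, use uv pip install instead
    • Building from source can take a long time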

Install sageattention 2

  1. Ensure the WebUI can already launch properly by following the installation steps first

  2. Open the console in the WebUI directory

    cd sd-webui-forge-classic
    
  3. Start the virtual environment

    venv\scripts\activate
    
  4. Create a new folder

    mkdir repo
    cd repo
    
  5. Clone the repo

    git clone https://github.com/thu-ml/SageAttention
    cd SageAttention
    
  6. Install the library

    pip install -e . --no-build-isolation
    
    • If you installed uv, use uv pip install instead
    • The installation takes a few minutes

Install older PyTorch

  1. Navigate to the WebUI directory
  2. Edit the webui-user.bat file
  3. Add a new line to specify an older version:

    set TORCH_COMMAND=pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121

Attention

The --xformers and --sage args are only responsible for installing the packages, not whether their respective attention is used (this also means you can remove them once the packages are successfully installed)

Forge Classic tries to import the packages and automatically choose the first available attention function in the following order:

  1. SageAttention
  2. FlashAttention
  3. xformers
  4. PyTorch
  5. Basic

To skip a specific attention, add the respective disable arg such as --disable-sage

The VAE only checks for xformers, so --xformers is still recommended even if you already have --sage
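
For example, a hedged webui-user.bat line that keeps xformers available for the VAE but skips SageAttention (assuming both packages are already installed):

    set COMMANDLINE_ARGS=--xformers --disable-sage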

In my experience, the speed of each attention function for SDXL is ranked in the following order:

  • SageAttention ≈ FlashAttention > xformers > PyTorch >> Basic

SageAttention is based on quantization, so its quality might be slightly worse than others


Issues & Requests

  • Issues about removed features will simply be ignored
  • Issues regarding installation will be ignored if they are obviously user error
  • Feature Requests not related to performance or optimization will simply be ignored
    • For cutting edge features, check out reForge instead
    • Non-Windows platforms will not be supported, as I cannot verify nor maintain them

Special thanks to AUTOMATIC1111, lllyasviel, comfyanonymous, and kijai,
along with the rest of the contributors,
for their invaluable efforts in the open-source image generation community
