---
language:
- en
base_model:
- Minthy/RouWei-0.7
pipeline_tag: text-to-image
library_name: diffusers
tags:
- anime
---
In-depth retraining of Illustrious to achieve the best prompt adherence, broad knowledge and state-of-the-art performance.
Dataset of 13M unique pictures (~4M with natural-text captions) picked and balanced from over 25M images of anime art, covers, digital illustrations, Western media and other sources, including private datasets. A more detailed description is available on Civitai.
Vpred version
Coming soon.
Key advantages:
- Fresh and vast knowledge of characters, concepts, styles, cultural references and related things
- The best prompt adherence among SDXL anime models at the time of release
- Solves the main problems with tag bleeding and biases common to Illustrious, NoobAI and other checkpoints
- Excellent aesthetics and knowledge across a wide range of styles (over 50,000 artists, with hundreds of unique cherry-picked datasets from private galleries, some received from the artists themselves)
- High flexibility and variety without a stability tradeoff
- No more annoying watermarks for popular styles, thanks to a clean dataset
- Vibrant colors and smooth gradients without a trace of burning; full dynamic range even with epsilon
- Pure training from Illustrious v0.1 without involving third-party checkpoints, LoRAs, tweakers, etc.
Dataset cut-off: end of April 2025.
Features and prompting:
Important change:
When you prompt artist styles, especially when mixing several, their tags MUST BE in a separate CLIP chunk. Add BREAK after them (for A1111 and derivatives), use a conditioning concat node (for ComfyUI), or at least put them at the very end of the prompt. Otherwise, significant degradation of results is likely.
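A minimal illustration for A1111 and derivatives; the artist names are placeholders, not recommendations:

```
by artist1, by artist2
BREAK
masterpiece, best quality, 1girl, silver hair, school uniform, night city, looking at viewer
```

In ComfyUI the same separation can be achieved by encoding the style tags in a second CLIP Text Encode node and joining it to the main prompt with a Conditioning (Concat) node.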
The model is designed to work both with short booru tag-based prompts and with long, complex natural-text prompts. The best results can be achieved by combining tags with some natural-text phrases. For tags, classic Danbooru-style comma-separated tags without underscores were used.
Basic settings:
~1..1.5 megapixels for txt2img, any aspect ratio with resolution a multiple of 64 (1024x1024, 1152x, 1216x832, ...). Euler_a, CFG 4..8 for epsilon / 3..5 for vpred, 20..28 steps. LCM/PCM/DMD untested, CFG++ samplers work fine, some schedulers do not work. Highres fix: x1.5 latent upscale + denoise 0.6, or any GAN upscaler + denoise 0.3..0.55.
Please note that the vpred version requires a lower CFG value.
Examples can be found in the repo, more on Civitai.
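As a rough sketch only (not an officially supported workflow), these settings map onto a diffusers pipeline roughly as follows; the local filename is a placeholder for whichever RouWei checkpoint you downloaded:

```python
# Minimal txt2img sketch with diffusers, assuming the epsilon checkpoint.
# "rouwei_v0.8_epsilon.safetensors" is a placeholder filename.
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "rouwei_v0.8_epsilon.safetensors",
    torch_dtype=torch.float16,
).to("cuda")
# Euler a, as recommended above
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="masterpiece, best quality, 1girl, silver hair, night city, looking at viewer",
    negative_prompt="worst quality, low quality, watermark",
    width=1216,
    height=832,                 # ~1-1.5 MP, sides a multiple of 64
    num_inference_steps=24,     # 20-28 steps
    guidance_scale=6.0,         # 4-8 for epsilon, 3-5 for vpred
).images[0]
image.save("rouwei_example.png")
```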
Quality tags:
There are only 4: `masterpiece, best quality` for positive and `low quality, worst quality` for negative. Nothing else. All except `low quality` in the negative can be omitted.
Meta tags like `lowres` have been removed; do not use them. Low-resolution images were either removed or upscaled and cleaned with DAT, depending on their importance.
Negative prompt:
worst quality, low quality, watermark
For best results keep it as clean as possible. Spamming popular negative-prompt boilerplate will not improve results, since the related flaws have already been addressed; it will only lead to unwanted effects, biases and poor quality.
Artist styles:
The model knows over 35k artist styles. A list and grids with examples are available on Mega. Artist tags are used with the `by ` prefix and will not work properly without it.
General styles:
2.5d, anime screencap, bold line, sketch, cgi, digital painting, flat colors, smooth shading, minimalistic, ink style, oil style, pastel style
Natural text
Use it in combination with booru tags (works great), use only natural text after the style and quality tags, or use just booru tags and forget about it: it's all up to you. About 4M pictures from the dataset have hybrid natural-text captions made by Claude, GPT, Gemini and ToriiGate. Version 0.8 comes with an advanced understanding of natural-text prompts, providing state-of-the-art performance among SDXL anime models. This does not mean you are obligated to use natural-language prompts; tags only is completely fine, especially because the understanding of tag combinations has also improved.
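A minimal illustration of a hybrid prompt (the content is just an example, not a recommended template):

```
masterpiece, best quality, 1girl, silver hair, school uniform,
she is sitting on a rooftop at sunset, holding a can of coffee and looking at the city below
```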
Brightness/colors/contrast:
You can use extra meta tags to control it:
low brightness, high brightness, low saturation, high saturation, low gamma, high gamma, sharp colors, soft colors, hdr, sdr
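For example, appended to an otherwise ordinary prompt:

```
masterpiece, best quality, 1girl, night city, rain, high saturation, low brightness, hdr
```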
Vpred version:
The vpred version for RouWei-0.8 will come soon.
Base model and Float version
You can use the FP32 version for more accurate merging, or to get some benefit from running the text encoders in FP32 mode with ComfyUI. The epsilon and vpred versions here received a brief aesthetic polishing pass after the main training to improve small details and coherence. If you want to use RouWei in merges, or extract from or finetune it without that last step, you can use the BASE VERSION of RouWei: FP16 FP32
Discord server
Safety:
The model tends to generate NSFW images for corresponding prompts; consider adding extra filtering. Outputs may be inaccurate or provocative and must not be used as a reference.
License:
Same as Illustrious; please check the original page for limitations. Feel free to use it in your merges, finetunes, etc., just please leave a link.
Thanks:
A number of anonymous persons, Bakariso, dga, Fi., ello, K., LOL2024, NeuroSenko, rred, Soviet Cat, Sv1., T., TekeshiX and other fellow brothers who helped.
Donations:
BTC bc1qwv83ggq8rvv07uk6dv4njs0j3yygj3aax4wg6c
ETH/USDT(e) 0x04C8a749F49aE8a56CB84cF0C99CD9E92eDB17db
XMR 47F7JAyKP8tMBtzwxpoZsUVB8wzg2VrbtDKBice9FAS1FikbHEXXPof4PAb42CQ5ch8p8Hs4RvJuzPHDtaVSdQzD6ZbA5TZ