Ready.Art
Quants by FrenzyBiscuit, specializing in uncensored LLM fine-tuning & quantization.

Featured Model
L3.3-The-Omega-Directive-70B-Unslop-v2.0
- Optimized for extreme roleplay scenarios
- Unfiltered narrative generation
Quantization Services
We provide optimized quants for all our models.
Recommended Settings
- For 70B models: Inception Presets
- For 24B models: Mistral-V7-Tekken-T8-XML Preset
- For 12B models: Mistral-V3-Tekken-T8-XML Preset

POC for Help
- FrenzyBiscuit (Project Lead & Quant Support)
- ToastyPigeon (Support Lead)
Need help? Join our Discord (https://discord.gg/Qrw6gcRtjc) for support, or just to chill with us!
Some of our quants, especially newer ones, are published on branches. To download a specific branch, find the branch you want and pass it to huggingface-cli with the --revision flag:

huggingface-cli download ReadyArt/model --local-dir . --revision "5.0_H6"

(where 5.0_H6 is the branch name)
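For scripted or repeated downloads, the same thing can be done from Python with the huggingface_hub library. The sketch below is a minimal example, not a ReadyArt-specific tool: "ReadyArt/model" and "5.0_H6" are the placeholder repo and branch names from the command above, so substitute the model and quant branch you actually want.

```python
# Minimal sketch using huggingface_hub; the repo_id and branch name are
# placeholders from the example above -- replace them with the model and
# quant branch you actually want.
from huggingface_hub import HfApi, snapshot_download

repo_id = "ReadyArt/model"  # placeholder repo name

# List the repo's branches so you can see which quant revisions exist.
refs = HfApi().list_repo_refs(repo_id)
print([branch.name for branch in refs.branches])

# Download a single branch into the current directory, equivalent to:
#   huggingface-cli download ReadyArt/model --local-dir . --revision "5.0_H6"
snapshot_download(repo_id=repo_id, revision="5.0_H6", local_dir=".")
```

Passing revision to snapshot_download mirrors the --revision flag of huggingface-cli, so only the files on that branch are fetched.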
Models
1,126 models published; a selection:
- ReadyArt/Steelskull_L3.3-Shakudo-70b-EXL2
- ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.1 (Text Generation, 71B)
- ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.1_EXL3 (Text Generation)
- ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.1_EXL2 (Text Generation)
- ReadyArt/C4-Broken-Tutu-24B (Text Generation, 24B)
- ReadyArt/R1-Broken-Tutu-24B (Text Generation, 24B)
- ReadyArt/Tarek07_Legion-V2.1-LLaMa-70B-EXL2
- ReadyArt/Darkhn-L3-3-70B-Animus-V7.0-EXL2
- ReadyArt/Not-WizardLM-2-The-Omega-Directive-7b-Unslop-v2.1 (Text Generation, 7B)
- ReadyArt/Darkhn-M3.2-24B-Animus-V7.1-EXL2

Datasets
None public yet.