# L3.3-oiiaioiiai-A2

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the TIES merge method, with ReadyArt/The-Omega-Directive-L-70B-v1.0 as the base.
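To make the `density` and `weight` parameters in the configuration below concrete, here is a hedged sketch of the TIES procedure on toy 1-D "task vectors" (fine-tuned weights minus base weights): trim each delta to its density, elect a sign per parameter, then average only the agreeing entries. This is an illustration of the algorithm, not mergekit's actual implementation, and the toy arrays are invented for the example.

```python
import numpy as np

def trim(delta: np.ndarray, density: float) -> np.ndarray:
    """Zero out all but the top `density` fraction of entries by magnitude."""
    k = max(1, int(round(density * delta.size)))
    threshold = np.sort(np.abs(delta))[-k]
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

def ties_merge(base, deltas, densities, weights):
    # 1. Trim each task vector to its density, then scale by its weight.
    trimmed = [w * trim(d, rho) for d, rho, w in zip(deltas, densities, weights)]
    stacked = np.stack(trimmed)
    # 2. Elect a sign per parameter from the summed trimmed deltas.
    elected = np.sign(stacked.sum(axis=0))
    # 3. Average only the nonzero entries that agree with the elected sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0) / counts
    return base + merged_delta

base = np.zeros(4)
deltas = [np.array([0.9, -0.1, 0.5, 0.0]), np.array([-0.8, 0.2, 0.4, 0.1])]
print(ties_merge(base, deltas, densities=[0.5, 0.5], weights=[0.5, 0.5]))
```

Note how the first parameter, where the two deltas disagree in sign, keeps only the contribution matching the elected sign instead of cancelling toward zero; that sign election is what distinguishes TIES from a plain weighted average.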

### Models Merged

The following models were included in the merge:

- Blackroot/Mirai-3.0-70B
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Anubis-70B-v1
- Sao10K/L3.3-70B-Euryale-v2.3
- Sao10K/70B-L3.3-Cirrus-x1
- Sao10K/70B-L3.3-mhnnn-x1
- nitky/Llama-3.3-SuperSwallowX-70B-Instruct-v0.1
- Undi95/Sushi-v1.4
- pankajmathur/orca_mini_v9_3_70B
- Fizzarolli/L3.1-70b-glitz-v0.2
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- flammenai/Llama3.1-Flammades-70B

### Configuration

The following YAML configuration was used to produce this model:

##############################################################################
# The benefit of L3 models is that all subversions are mergeable in some way.
# So we can create something **REALLY REALLY REALLY** stupid like this.
##############################################################################
# PLEASE DO NOT FOLLOW.
# This will probably show up on the hf repo. Hi there!
##############################################################################
# - KaraKaraWitch.
# P.S. 3e7aWKeGHFE (15/04/25)
##############################################################################

models:
  # Ahh, Mirai. It's an old model, but it's made by Coffee Vampire.
  - model: Blackroot/Mirai-3.0-70B
    parameters:
      density: 0.2
      weight: 0.5
  # Anyone still remember when claude-isms were still a thing? KaraKaraWitch remembers.
  - model: Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
    parameters:
      density: 0.3
      weight: 0.5
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      density: 0.75
      weight: 0.5
  # Drummer models are drummer models.
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      density: 0.351
      weight: 0.751
  # Sao10k
  - model: Sao10K/L3.3-70B-Euryale-v2.3
    parameters:
      density: 0.420
      weight: 0.679
  - model: Sao10K/70B-L3.3-Cirrus-x1
    parameters:
      density: 0.43
      weight: 0.3
  - model: Sao10K/70B-L3.3-mhnnn-x1
    parameters:
      density: 0.23
      weight: 0.11
  # Weebs represent.
  - model: nitky/Llama-3.3-SuperSwallowX-70B-Instruct-v0.1
    parameters:
      density: 0.25
      weight: 0.2
  # I still have no idea what Undi was planning for this
  - model: Undi95/Sushi-v1.4
    parameters:
      density: 0.1457
      weight: 0.69
  # ?
  - model: pankajmathur/orca_mini_v9_3_70B
    parameters:
      density: 0.2
      weight: 0.2
  # I shall also refuse to credit cognitive computations.
  - model: Fizzarolli/L3.1-70b-glitz-v0.2
    parameters:
      density: 0.34
      weight: 0.2
  # more drummer, but this is R1...
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
    parameters:
      density: 0.04179
      weight: 0.1696
  - model: flammenai/Llama3.1-Flammades-70B
    parameters:
      density: 0.7234
      weight: 0.1718
  # - model: ReadyArt/The-Omega-Directive-L-70B-v1.0
  #   parameters:
  #     density: 0.2437
  #     weight: 0.3198
  # - model: Black-Ink-Guild/Pernicious_Prophecy_70B
  #   parameters:
  #     density: 0.8129
  #     weight: 0.3378
  # # De-alignment
  # - model: PKU-Alignment/alpaca-70b-reproduced-llama-3
  #   parameters:
  #     density: 0.7909
  #     weight: 0.672
  # # Text Adventure
  # - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
  #   parameters:
  #     density: 0.5435
  #     weight: 0.7619
  # - model: KaraKaraWitch/Llama-3.3-Amakuro
  #   parameters:
  #     density: 0.37
  #     weight: 0.359
  # - model: ReadyArt/Forgotten-Safeword-70B-v5.0
  #   parameters:
  #     density: 0.37
  #     weight: 0.359
  # - model: Undi95/Sushi-v1.4
  #   parameters:
  #     density: 0.623
  #     weight: 0.789
  # - model: sophosympatheia/Nova-Tempus-70B-v0.1
  #   parameters:
  #     density: 0.344
  #     weight: 0.6382
  # - model: flammenai/Mahou-1.5-llama3.1-70B
  #   parameters:
  #     density: 0.56490
  #     weight: 0.4597
  # # Changelog: [ADDED] furries.
  # - model: Mawdistical/Draconic-Tease-70B
  #   parameters:
  #     density: 0.4706
  #     weight: 0.3697
  # # R1 causes a lot of alignment. So we avoid it.
  # - model: Steelskull/L3.3-Electra-R1-70b
  #   parameters:
  #     density: 0.1692
  #     weight: 0.1692
  # # Blue hair, blue tie... Hiding in your wiifii
  # # - model: sophosympatheia/Midnight-Miqu-70B-v1.0
  # #   parameters:
  # #     density: 0.4706
  # #     weight: 0.3697
  # # OpenBioLLM does not use safetensors in the repo. Custom safetensors version.
  # - model: OpenBioLLM
  #   parameters:
  #     density: 0.267
  #     weight: 0.1817
  # - model: allura-org/Bigger-Body-70b
  #   parameters:
  #     density: 0.6751
  #     weight: 0.3722
  # - model: nbeerbower/Llama3-Asobi-70B
  #   parameters:
  #     density: 0.7113
  #     weight: 0.4706
  # # ...Reminds me that any time I try to merge in SEALION models,
  # # it ends up overpowering the other models. So I'm setting it *really* low.
  # - model: aisingapore/Llama-SEA-LION-v3-70B-IT
  #   parameters:
  #     density: 0.0527
  #     weight: 0.1193


merge_method: ties
base_model: ReadyArt/The-Omega-Directive-L-70B-v1.0
parameters:
  select_topk: 0.50
dtype: bfloat16
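For reference, a configuration like the one above is typically run through mergekit's `mergekit-yaml` entry point. The file name and output directory below are invented for the example, and the flags shown are illustrative; check `mergekit-yaml --help` for the options in your installed version.

```shell
# Assumptions: the YAML above is saved as oiiaioiiai.yaml and you have
# enough disk for a 70B-parameter bfloat16 output.
pip install mergekit

# --cuda does the tensor math on GPU; --lazy-unpickle reduces peak RAM
# while streaming the checkpoint shards.
mergekit-yaml oiiaioiiai.yaml ./oiiaioiiai-A2 --cuda --lazy-unpickle
```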