---
base_model:
- Qwen/Qwen3-14B
- ValiantLabs/Qwen3-14B-Esper3
- soob3123/GrayLine-Qwen3-14B
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- mergekit
- merge
- esper
- esper-3
- grayline
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-14b
- 14b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- uncensored
- unfiltered
- amoral-ai
- conversational
- chat
- instruct
---

# sequelbox/Qwen3-14B-Esper3Grayline

This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit), combining Esper 3 14B's specialty skills with GrayLine 14B's uncensored reasoning.

## Merge Details

### Merge Method

This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) as the base model.

### Models Merged

The following models were included in the merge:

* [ValiantLabs/Qwen3-14B-Esper3](https://huggingface.co/ValiantLabs/Qwen3-14B-Esper3)
* [soob3123/GrayLine-Qwen3-14B](https://huggingface.co/soob3123/GrayLine-Qwen3-14B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: della
dtype: bfloat16
parameters:
  normalize: true
models:
  - model: ValiantLabs/Qwen3-14B-Esper3
    parameters:
      density: 0.5
      weight: 0.3
  - model: soob3123/GrayLine-Qwen3-14B
    parameters:
      density: 0.5
      weight: 0.25
base_model: Qwen/Qwen3-14B
```
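For intuition only (this is not how the model was produced; mergekit handles the real merge), here is a simplified NumPy sketch of the DELLA idea behind the configuration above: each fine-tune's delta from the base is randomly pruned with keep probability proportional to element magnitude (the `density` parameter), survivors are rescaled so the expected delta is preserved, and the pruned deltas are combined with normalized `weight` values. The helper names `magprune` and `della_merge` are hypothetical, and this sketch omits DELLA's sign-election step.

```python
import numpy as np

def magprune(delta, density, rng):
    """Drop-and-rescale (simplified): keep each element with probability
    proportional to its magnitude (mean keep rate = density), then divide
    survivors by their keep probability so the expected value is unchanged."""
    mag = np.abs(delta)
    if mag.mean() == 0.0:          # zero delta: nothing to prune
        return np.zeros_like(delta)
    p = np.clip(density * mag / mag.mean(), 0.0, 1.0)
    keep = rng.random(delta.shape) < p
    out = np.zeros_like(delta)
    nz = keep & (p > 0)
    out[nz] = delta[nz] / p[nz]    # rescale survivors
    return out

def della_merge(base, models, weights, density, rng):
    """Combine pruned task deltas with normalized weights (normalize: true)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    merged_delta = np.zeros_like(base)
    for m, wi in zip(models, w):
        merged_delta += wi * magprune(m - base, density, rng)
    return base + merged_delta
```

With the config above, `density: 0.5` means roughly half of each delta's elements survive pruning, and the weights 0.3 and 0.25 are renormalized to sum to 1 before mixing.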