---
base_model:
- BeaverAI/MN-2407-DSK-QwQify-v0.1-12B
- Nitral-AI/Mag-Mell-Reasoner-12B
- OddTheGreat/Sinner_12B_V.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
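
For quick testing, here is a minimal sketch of loading the merged model with `transformers`. The repo id below is a placeholder (this card does not name a published repo; a local path to the merge output also works), and `torch_dtype` is set to match the `out_dtype` in the configuration further down.

```python
# Minimal loading sketch. "your-namespace/this-merge" is a placeholder repo id,
# not the actual location of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/this-merge"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype: bfloat16 in the merge config
    device_map="auto",           # requires accelerate
)

prompt = "Write a short scene set on a rain-soaked rooftop."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```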
## Merge Details

### Merge Method

This model was merged using the [Model Breadcrumbs](https://arxiv.org/abs/2312.06795) merge method, with [OddTheGreat/Sinner_12B_V.1](https://huggingface.co/OddTheGreat/Sinner_12B_V.1) as the base model.
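
Roughly, Model Breadcrumbs builds a task vector for each donor model (its weights minus the base weights), zeroes out both the smallest-magnitude entries and a small fraction of the largest-magnitude outliers, then adds the weighted, sparsified vectors back onto the base. The sketch below illustrates that idea on a single dense tensor; it is not mergekit's implementation, and the interpretation of `density` (fraction of entries kept) and `gamma` (fraction of top outliers dropped) is an assumption made for illustration.

```python
# Illustrative sketch of the Model Breadcrumbs idea on one tensor.
# NOT mergekit's implementation; the meaning of density/gamma here is assumed.
import numpy as np

def breadcrumbs_mask(delta: np.ndarray, density: float, gamma: float) -> np.ndarray:
    """Keep mid-magnitude entries of a task vector: drop the top-`gamma`
    fraction (outliers) and everything below the retained `density` band."""
    flat_mag = np.abs(delta).ravel()
    order = np.argsort(flat_mag)           # indices, ascending by magnitude
    n = flat_mag.size
    n_outliers = int(round(gamma * n))     # largest-magnitude entries to drop
    n_keep = int(round(density * n))       # entries to retain just below the outliers
    start = max(n - n_outliers - n_keep, 0)
    keep_idx = order[start : n - n_outliers]
    mask = np.zeros(n, dtype=bool)
    mask[keep_idx] = True
    return np.where(mask.reshape(delta.shape), delta, 0.0)

def breadcrumbs_merge(base: np.ndarray, donors: list) -> np.ndarray:
    """donors: list of (weights, weight, density, gamma) tuples."""
    merged = base.astype(np.float64)
    for weights, weight, density, gamma in donors:
        merged += weight * breadcrumbs_mask(weights - base, density, gamma)
    return merged
```

Note that in the configuration below the base model also carries its own per-layer weight schedule, which this toy version ignores.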
### Models Merged

The following models were included in the merge:

* [BeaverAI/MN-2407-DSK-QwQify-v0.1-12B](https://huggingface.co/BeaverAI/MN-2407-DSK-QwQify-v0.1-12B)
* [Nitral-AI/Mag-Mell-Reasoner-12B](https://huggingface.co/Nitral-AI/Mag-Mell-Reasoner-12B)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: OddTheGreat/Sinner_12B_V.1
    parameters:
      weight: [0.9, 1, 0.8, 0.65, 0.2, 0.05, 0.1, 0.35, 0.65, 0.8]
  - model: BeaverAI/MN-2407-DSK-QwQify-v0.1-12B
    parameters:
      weight: [0.1, 10e-5, 10e-5, 0.1, 0.5, 0.5, 0.5, 0.35, 0.1, 10e-5]
      density: [0.35, 1, 1, 0.35, 0.35, 0.5, 0.5, 0.6, 0.9, 1]
      gamma: [5e-4, 5e-3, 5e-3, 5e-4, 5e-4, 0.01, 0.01, 0.01, 5e-3, 5e-4]
  - model: Nitral-AI/Mag-Mell-Reasoner-12B
    parameters:
      weight: [10e-5, 10e-5, 0.2, 0.25, 0.3, 0.45, 0.4, 0.3, 0.25, 0.2]
      density: [1, 1, 0.9, 0.9, 0.85, 0.85, 0.8, 0.85, 0.9, 0.9]
      gamma: 0.03
merge_method: breadcrumbs
base_model: OddTheGreat/Sinner_12B_V.1
parameters:
  int8_mask: false
  rescale: true
  normalize: false
dtype: float32
out_dtype: bfloat16
tokenizer_source: BeaverAI/MN-2407-DSK-QwQify-v0.1-12B
```
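
To reproduce a merge from this configuration, mergekit provides a `mergekit-yaml` command-line entry point as well as a Python API. The sketch below follows the Python pattern shown in mergekit's README; the exact names (`MergeConfiguration`, `MergeOptions`, `run_merge`) and available options may differ between mergekit versions, so treat it as a starting point rather than a guaranteed interface.

```python
# Sketch of running the merge via mergekit's Python API, following the pattern in
# mergekit's README. Names and options may vary across mergekit versions.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML above has been saved to merge_config.yaml.
with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(cuda=torch.cuda.is_available()),
)
```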