Paper: Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch (arXiv:2311.03099)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the linear DARE (`dare_linear`) merge method, with unsloth/Llama-3.2-3B as the base model.
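For reference, DARE works on parameter deltas: it takes each fine-tuned model's difference from the base, randomly drops a fraction of those delta entries, rescales the survivors so the expected delta is unchanged, and then combines the rescaled deltas linearly before adding them back to the base. The snippet below is a minimal sketch of that idea over plain state dicts, not mergekit's actual implementation; the function name and the `density` value are illustrative assumptions only.

```python
# Illustrative sketch of DARE linear merging (Yu et al., arXiv:2311.03099).
# This is NOT mergekit's implementation; names and defaults are assumptions.
import torch


def dare_linear_merge(base, finetuned, weights, density=0.5):
    """Merge fine-tuned state dicts into the base via drop-and-rescale deltas.

    base:      dict[str, Tensor] - base model parameters
    finetuned: list[dict[str, Tensor]] - fine-tuned model parameters
    weights:   list[float] - per-model merge weights
    density:   fraction of delta entries to keep (drop rate = 1 - density)
    """
    total = sum(weights)
    weights = [w / total for w in weights]  # mirrors `normalize: true`
    merged = {}
    for name, base_param in base.items():
        base_fp32 = base_param.to(torch.float32)
        delta_sum = torch.zeros_like(base_fp32)
        for state_dict, w in zip(finetuned, weights):
            delta = state_dict[name].to(torch.float32) - base_fp32
            # Drop And REscale: zero out a random subset of the delta,
            # then divide the kept entries by `density` so the expected
            # contribution of each parameter is preserved.
            mask = torch.bernoulli(torch.full_like(delta, density))
            delta_sum += w * (delta * mask) / density
        merged[name] = base_fp32 + delta_sum
    return merged
```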
The following models were included in the merge:
* SicariusSicariiStuff/Impish_LLAMA_3B
* djuna/ReWiz-Llama-3.2-3B-fix-config
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: unsloth/Llama-3.2-3B
  - model: SicariusSicariiStuff/Impish_LLAMA_3B
    parameters:
      weight: 1
  - model: djuna/ReWiz-Llama-3.2-3B-fix-config
    parameters:
      weight: 1
merge_method: dare_linear
base_model: unsloth/Llama-3.2-3B
tokenizer_source: djuna/ReWiz-Llama-3.2-3B-fix-config
parameters:
  normalize: true
  int8_mask: true
dtype: float32
name: rewish
```
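A merge like this can be reproduced by saving the configuration above to a file (e.g. `config.yaml`, a hypothetical filename) and passing it to mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yaml ./rewish`, where the output directory is also illustrative.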
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 22.45 |
| IFEval (0-Shot) | 63.68 |
| BBH (3-Shot) | 22.07 |
| MATH Lvl 5 (4-Shot) | 12.92 |
| GPQA (0-shot) | 4.47 |
| MuSR (0-shot) | 7.92 |
| MMLU-PRO (5-shot) | 23.62 |
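The reported Avg. is consistent with the simple mean of the six benchmark scores: (63.68 + 22.07 + 12.92 + 4.47 + 7.92 + 23.62) / 6 = 134.68 / 6 ≈ 22.45.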