Panacea-MegaScience-Qwen3-1.7B is a high-efficiency, multi-domain model fine-tuned from Qwen3-1.7B on the MegaScience/MegaScience dataset. MegaScience has shown greater effectiveness for larger and stronger models, suggesting a scaling benefit for scientific instruction tuning. The model blends symbolic precision, scientific logic, and structured output fluency, making it well suited for developers, educators, and researchers who need advanced reasoning under constrained compute.
| File Name | Quant Type | File Size |
|---|---|---|
| Panacea-MegaScience-Qwen3-1.7B.BF16.gguf | BF16 | 3.45 GB |
| Panacea-MegaScience-Qwen3-1.7B.F16.gguf | F16 | 3.45 GB |
| Panacea-MegaScience-Qwen3-1.7B.F32.gguf | F32 | 6.89 GB |
| Panacea-MegaScience-Qwen3-1.7B.Q2_K.gguf | Q2_K | 778 MB |
| Panacea-MegaScience-Qwen3-1.7B.Q3_K_L.gguf | Q3_K_L | 1 GB |
| Panacea-MegaScience-Qwen3-1.7B.Q3_K_M.gguf | Q3_K_M | 940 MB |
| Panacea-MegaScience-Qwen3-1.7B.Q3_K_S.gguf | Q3_K_S | 867 MB |
| Panacea-MegaScience-Qwen3-1.7B.Q4_K_M.gguf | Q4_K_M | 1.11 GB |
| Panacea-MegaScience-Qwen3-1.7B.Q4_K_S.gguf | Q4_K_S | 1.06 GB |
| Panacea-MegaScience-Qwen3-1.7B.Q5_K_M.gguf | Q5_K_M | 1.26 GB |
| Panacea-MegaScience-Qwen3-1.7B.Q5_K_S.gguf | Q5_K_S | 1.23 GB |
| Panacea-MegaScience-Qwen3-1.7B.Q6_K.gguf | Q6_K | 1.42 GB |
| Panacea-MegaScience-Qwen3-1.7B.Q8_0.gguf | Q8_0 | 1.83 GB |
(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
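For local inference, any of the quants above can be loaded with llama.cpp or its Python bindings. Below is a minimal sketch using huggingface_hub and llama-cpp-python; the repo id is a placeholder, and the Q4_K_M quant, context size, and prompt are illustrative choices rather than recommendations from this card.

```python
# Minimal sketch: download one GGUF quant and run it with llama-cpp-python.
# The repo_id below is a placeholder -- replace it with the actual Hugging Face
# repository hosting these GGUF files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant (a common size/quality trade-off; any file from the table works).
model_path = hf_hub_download(
    repo_id="your-namespace/Panacea-MegaScience-Qwen3-1.7B-GGUF",  # placeholder
    filename="Panacea-MegaScience-Qwen3-1.7B.Q4_K_M.gguf",
)

# Load the model; n_ctx is an illustrative context length.
llm = Llama(model_path=model_path, n_ctx=4096)

# Run a single chat-style completion.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the ideal gas law in two sentences."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```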
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
(Graph omitted: it compares 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, and 32-bit quant types.)
Base model: Qwen/Qwen3-1.7B-Base