# Mixtral-8x22B-Instruct-v0.1-NVFP4
NVFP4-quantized version of mistralai/Mixtral-8x22B-Instruct-v0.1, produced by Enfuse.
## Model Overview
| Attribute | Value |
|---|---|
| Base Model | mistralai/Mixtral-8x22B-Instruct-v0.1 |
| Total Parameters | 141B (Mixture-of-Experts) |
| Active Parameters | ~39B (top-2 of 8 experts per token) |
| Architecture | Sparse MoE Transformer |
| Quantization | NVFP4 (W4A4 with FP4 weights and dynamic FP4 activations) |
| Format | compressed-tensors (safetensors) |
| Precision | FP4 weights (group_size=16), FP8 scales, lm_head unquantized |
| Approx. Size | ~75 GB (down from ~282 GB in BF16) |
| Context Length | 65,536 tokens |
| License | Apache 2.0 |
## Why NVFP4 Matters for MoE

Mixture-of-experts models keep every expert's weights in memory even though only a subset is active per token. Mixtral-8x22B stores 141B total parameters but activates only ~39B per forward pass, so memory, not compute, is the primary bottleneck. NVFP4 quantization cuts the footprint from ~282 GB to ~75 GB, making the model deployable on significantly fewer GPUs without sacrificing the MoE architecture's efficiency advantages.
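The arithmetic behind those figures can be sketched as a back-of-the-envelope estimate (assumptions: 4-bit weights plus one FP8 scale per 16-weight group; the real checkpoint comes in slightly smaller because format overheads and excluded layers shift the total):

```python
# Rough memory-footprint estimate for Mixtral-8x22B under NVFP4.
# Back-of-the-envelope only: the actual checkpoint keeps lm_head and
# the MoE router gates in higher precision and adds format metadata.
TOTAL_PARAMS = 141e9
BF16_BYTES = 2.0
FP4_BYTES = 0.5        # 4 bits per weight
GROUP_SIZE = 16        # one scale per 16-weight group
SCALE_BYTES = 1.0      # FP8 scale

bf16_gb = TOTAL_PARAMS * BF16_BYTES / 1e9
nvfp4_gb = TOTAL_PARAMS * (FP4_BYTES + SCALE_BYTES / GROUP_SIZE) / 1e9

print(f"BF16:  ~{bf16_gb:.0f} GB")   # ~282 GB
print(f"NVFP4: ~{nvfp4_gb:.0f} GB")  # ~79 GB before overheads/exclusions
```

The estimate lands close to the ~75 GB listed in the overview table; the gap comes from the details the sketch deliberately ignores.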
## How to Use

### vLLM (recommended)
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "enfuse/Mixtral-8x22B-Instruct-v0.1-NVFP4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=2)

sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=512)

messages = [
    {"role": "user", "content": "Explain the benefits of mixture-of-experts architectures."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```
## Hardware Requirements
- Full NVFP4 (W4A4): Requires NVIDIA Blackwell GPU (B200, GB200, RTX 5090) for native FP4 tensor core support
- Weight-only FP4: Older GPUs (H100, A100) can load the model but will only apply weight quantization
- Recommended: 2x B200 with tensor parallelism
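A small, hypothetical helper for checking whether a GPU can take the full W4A4 path (the helper name is illustrative, not part of any library; it assumes native FP4 tensor cores arrive with Blackwell-class parts, i.e. CUDA compute capability 10.x and up):

```python
def supports_native_fp4(compute_capability: tuple) -> bool:
    """Return True if the given (major, minor) compute capability has
    native FP4 tensor cores. Blackwell datacenter parts report SM 10.x
    and consumer parts SM 12.x; Hopper (9.0) and Ampere (8.x) do not,
    and fall back to weight-only dequantization paths."""
    major, _minor = compute_capability
    return major >= 10

# With torch installed, the local GPU could be checked like this:
#   import torch
#   supports_native_fp4(torch.cuda.get_device_capability(0))
print(supports_native_fp4((10, 0)))  # B200  -> True
print(supports_native_fp4((9, 0)))   # H100  -> False
```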
## Quantization Details
Quantized using LLM Compressor (v0.10.0):
- Method: Post-training quantization (PTQ) with calibration
- Calibration data: 512 samples from HuggingFaceH4/ultrachat_200k
- Sequence length: 2048 tokens
- Scheme: NVFP4
- Excluded layers: `lm_head`, MoE router gates (`block_sparse_moe.gate`)
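For reference, the settings above roughly correspond to an LLM Compressor one-shot recipe along these lines (a hedged sketch, not the exact script used; argument names follow llmcompressor's `oneshot` API and may differ slightly in v0.10.0):

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Recipe mirroring the card's stated scheme: NVFP4 on Linear layers,
# with lm_head and the MoE router gates left unquantized.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",
    ignore=["lm_head", "re:.*block_sparse_moe.gate"],
)

oneshot(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",
    dataset="ultrachat_200k",        # calibration set named in the card
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
    output_dir="Mixtral-8x22B-Instruct-v0.1-NVFP4",
)
```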
## Infrastructure
Quantized on an NVIDIA DGX B200 (8x B200, 2 TiB RAM, CUDA 13.0).
## Evaluation

### OpenLLM v1 Benchmarks

Evaluated with lm-evaluation-harness (v0.4.11) using the vLLM backend, `--apply_chat_template --fewshot_as_multiturn`, and `tensor_parallel_size=2` on NVIDIA B200 GPUs.
| Benchmark | Metric | n-shot | NVFP4 | BF16 Reference | Recovery |
|---|---|---|---|---|---|
| ARC-Challenge | acc_norm | 25 | 59.30 | 72.7 | 81.6% |
| GSM8K | exact_match | 5 | 70.13 | 82.0 | 85.5% |
| HellaSwag | acc_norm | 10 | 81.00 | 89.1 | 90.9% |
| MMLU | acc | 5 | 68.80 | 77.8 | 88.4% |
| TruthfulQA MC2 | acc | 0 | 62.52 | 68.1 | 91.8% |
| Winogrande | acc | 5 | 76.09 | 85.2 | 89.3% |
BF16 reference scores are from the Open LLM Leaderboard v1. Average recovery: ~88%.
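The recovery column can be reproduced from the table with a few lines of Python:

```python
# Per-benchmark recovery (NVFP4 / BF16) from the table above,
# and the average quoted in the card.
scores = {
    "ARC-Challenge": (59.30, 72.7),
    "GSM8K":         (70.13, 82.0),
    "HellaSwag":     (81.00, 89.1),
    "MMLU":          (68.80, 77.8),
    "TruthfulQA":    (62.52, 68.1),
    "Winogrande":    (76.09, 85.2),
}
recoveries = {k: nvfp4 / bf16 * 100 for k, (nvfp4, bf16) in scores.items()}
avg = sum(recoveries.values()) / len(recoveries)
print(f"average recovery: {avg:.1f}%")  # ~87.9%
```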
## About Enfuse
Enfuse builds sovereign AI infrastructure for regulated enterprises. The Enfuse platform provides on-prem LLM orchestration and an App Factory for shipping governed, compliant AI applications on your own infrastructure.
This quantization is part of our ongoing work to make large language models more accessible and efficient for on-premise deployment, where memory efficiency directly impacts what models organizations can run within their own data centers.
## Acknowledgments
- Mistral AI for the Mixtral-8x22B-Instruct model
- vLLM Project for LLM Compressor
- NVIDIA for the NVFP4 format and Blackwell hardware