ParoQuant
Pairwise Rotation Quantization for Efficient Reasoning LLM Inference
ParoQuant is a state-of-the-art INT4 quantization method for LLMs. It closes the accuracy gap with FP16 while running at near-AWQ speed, and supports NVIDIA GPUs (via vLLM and Transformers) as well as Apple Silicon (via MLX).
z-lab/Qwen3-1.7B-PARO is a 4-bit version of Qwen/Qwen3-1.7B quantized with ParoQuant. Check out other ParoQuant models in the Hugging Face collection; swap the model name in the commands below to try any of them.
# NVIDIA GPU
pip install "paroquant[vllm]"
# Apple Silicon
pip install "paroquant[mlx]"
# Interactive chat
python -m paroquant.cli.chat --model z-lab/Qwen3-1.7B-PARO
# API server (port 8000)
python -m paroquant.cli.serve --model z-lab/Qwen3-1.7B-PARO --port 8000
# Interactive chat
docker run --pull=always --rm -it --gpus all --ipc=host \
ghcr.io/z-lab/paroquant:chat --model z-lab/Qwen3-1.7B-PARO
# API server (port 8000)
docker run --pull=always --rm -it --gpus all --ipc=host -p 8000:8000 \
ghcr.io/z-lab/paroquant:serve --model z-lab/Qwen3-1.7B-PARO
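Once the server is running (via either the pip or Docker route above), you can send it requests. A minimal sketch, assuming the serve command exposes an OpenAI-compatible chat-completions endpoint on port 8000, as vLLM-based servers typically do; the prompt and `max_tokens` value are illustrative:

```shell
# Query the local server (assumes an OpenAI-compatible API
# at http://localhost:8000/v1, which vLLM-based servers provide)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "z-lab/Qwen3-1.7B-PARO",
    "messages": [{"role": "user", "content": "What is 17 * 24?"}],
    "max_tokens": 256
  }'
```

The response is a standard chat-completions JSON object, so existing OpenAI client libraries should work by pointing their base URL at the local server.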
@inproceedings{liang2026paroquant,
title = {{ParoQuant: Pairwise Rotation Quantization for Efficient Reasoning LLM Inference}},
author = {Liang, Yesheng and Chen, Haisheng and Zhang, Zihan and Han, Song and Liu, Zhijian},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2026}
}