zai-org__GLM-Z1-9B-0414_RTN_w4g128
This is a 4-bit RTN (Round-To-Nearest) quantized version of zai-org/GLM-Z1-9B-0414.
Quantization Details
- Method: RTN (Round-To-Nearest)
- Bits: 4-bit
- Group Size: 128 (each group of 128 consecutive weights in a row shares one scale; see the sketch after this list)
- Base Model: zai-org/GLM-Z1-9B-0414
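With a group size of 128, every 128 consecutive weights in a row share one scale and zero point, and each weight is rounded to the nearest representable 4-bit value. The snippet below is a minimal, illustrative sketch of that scheme in PyTorch; it is not the exact code used to produce this checkpoint, and the function name is a placeholder.

import torch

def rtn_quantize_w4g128(weight: torch.Tensor, group_size: int = 128):
    """Illustrative asymmetric 4-bit round-to-nearest quantization, one scale/zero point per group."""
    out_features, in_features = weight.shape  # in_features must be divisible by group_size
    w = weight.reshape(out_features, in_features // group_size, group_size)
    # Per-group min/max define an affine mapping onto the 4-bit integer range [0, 15].
    w_min = w.amin(dim=-1, keepdim=True)
    w_max = w.amax(dim=-1, keepdim=True)
    scale = (w_max - w_min).clamp(min=1e-8) / 15.0
    zero_point = torch.round(-w_min / scale)
    # Round to nearest, clamp to the representable range, then dequantize for comparison.
    q = torch.clamp(torch.round(w / scale) + zero_point, 0, 15)
    w_deq = ((q - zero_point) * scale).reshape_as(weight)
    return q.reshape(out_features, in_features), w_deq, scale, zero_point

# Example: worst-case quantization error on a random matrix with two groups per row.
w = torch.randn(4, 256)
q, w_deq, scale, zp = rtn_quantize_w4g128(w)
print((w - w_deq).abs().max())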
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "quantpa/zai-org__GLM-Z1-9B-0414_RTN_w4g128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the model on the available GPU(s);
# torch_dtype="auto" loads weights in the dtype stored in the checkpoint.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
# Run a short generation to verify the quantized model loads and responds.
inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
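GLM-Z1 is a chat/reasoning model, so prompts are normally formatted with the tokenizer's chat template. A minimal sketch, assuming the checkpoint ships with a chat template (the prompt text is only a placeholder):

messages = [{"role": "user", "content": "Explain round-to-nearest quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))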
Model Details
- Quantization: RTN 4-bit
- Original Model: zai-org/GLM-Z1-9B-0414
- Quantized by: quantpa