smpanaro/gpt2-large-AutoGPTQ-4bit-128g

Tags: Text Generation · Transformers · wikitext · gpt2 · 4-bit precision · gptq · License: mit
Files and versions
Branch: main · 630 MB total · 1 contributor · History: 4 commits
Latest commit: "Update README.md" by smpanaro (aa5636f, verified) · almost 2 years ago
File                                Size       Last commit message                  When
.gitattributes                      1.52 kB    initial commit                       almost 2 years ago
README.md                           1.17 kB    Update README.md                     almost 2 years ago
config.json                         1.27 kB    Upload of AutoGPTQ quantized model   almost 2 years ago
gptq_model-4bit-128g.safetensors    630 MB     Upload of AutoGPTQ quantized model   almost 2 years ago
quantize_config.json                302 Bytes  Upload of AutoGPTQ quantized model   almost 2 years ago
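
The quantized weights live in gptq_model-4bit-128g.safetensors, with the quantization settings in quantize_config.json. Below is a minimal sketch of loading this checkpoint with the AutoGPTQ library; it assumes auto-gptq and transformers are installed and a CUDA device is available, and the prompt, device, and generation settings are illustrative rather than prescribed by this repo.

```python
# Minimal sketch: load the 4-bit, 128-group-size GPTQ checkpoint with AutoGPTQ.
# Assumes `pip install auto-gptq transformers` and a CUDA device.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo_id = "smpanaro/gpt2-large-AutoGPTQ-4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    use_safetensors=True,  # weights are stored as gptq_model-4bit-128g.safetensors
    device="cuda:0",
)

# Generate a short continuation as a smoke test.
inputs = tokenizer("The quick brown fox", return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Recent versions of transformers (with optimum and auto-gptq installed) can also load GPTQ checkpoints directly through AutoModelForCausalLM.from_pretrained, picking up the quantization settings from config.json.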