Active filters: GPTQ
DanielAWrightGabrielAI/pygmalion-7b-4bit-128g-cuda-2048Token • Text Generation • Updated • 8 • 15
CalderaAI/13B-Ouroboros-GPTQ4bit-128g-CUDA • Text Generation • Updated • 4
daedalus314/Griffin-3B-GPTQ • Text Generation • 3B • Updated • 4
daedalus314/Marx-3B-V2-GPTQ • Text Generation • Updated • 5
TKDKid1000/pythia-2.8b-deduped-GPTQ • Text Generation • Updated • 3
Trelis/Yi-34B-200K-Llamafied-chat-SFT-function-calling-v2-GPTQ • Text Generation • Updated
Inferless/SOLAR-10.7B-Instruct-v1.0-GPTQ • Text Generation • Updated • 6 • 2
Inferless/Mixtral-8x7B-v0.1-int8-GPTQ • Text Generation • Updated • 1 • 2
Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ • Text Generation • Updated • 1 • 4
bigquant/Senku-70B-GPTQ-4bit • Text Generation • Updated • 2 • 1
twhoool02/Llama-2-7b-hf-AutoGPTQ • Text Generation • 7B • Updated • 3
Dmitriy007/rugpt2_gen_news-gptq-4bit • Text Generation • 0.1B • Updated • 4
SwastikM/Llama-2-7B-Chat-text2code • Text Generation • Updated • 10 • 4
adriabama06/Llama-3.2-1B-Instruct-GPTQ-8bit-128g • Text Generation • 1B • Updated • 3 • 1
NightForger/saiga_nemo_12b-GPTQ • Text Generation • Updated • 14
NaomiBTW/L3-8B-Lunaris-v1-GPTQ • Text Generation • Updated
GusPuffy/Llama-3.1-70B-ArliAI-RPMax-v1.3-GPTQ • 11B • Updated
iSolver-AI/test123-quantized.w4a16 • Image-Text-to-Text • Updated • 7
AXERA-TECH/DeepSeek-R1-Distill-Qwen-1.5B-GPTQ-Int4 • Updated • 10 • 1
AXERA-TECH/DeepSeek-R1-Distill-Qwen-7B-GPTQ-Int4 • Updated • 12 • 1
AXERA-TECH/Qwen2.5-1.5B-Instruct-GPTQ-Int4 • Text Generation • Updated • 92
AXERA-TECH/Qwen2.5-3B-Instruct-GPTQ-Int4
AXERA-TECH/Qwen2.5-0.5B-Instruct-GPTQ-Int4 • Text Generation • Updated • 4
AXERA-TECH/Qwen2.5-7B-Instruct-GPTQ-Int4 • Updated
RedHatAI/DeepSeek-R1-quantized.w4a16 • Text Generation • 104B • Updated • 50 • 7
JunHowie/Qwen3-0.6B-GPTQ-Int4 • Text Generation • 0.6B • Updated • 143 • 1
JunHowie/Qwen3-0.6B-GPTQ-Int8 • Text Generation • 0.6B • Updated • 16