vLLM serve for Qwen3-Omni currently only supports the thinker model.
#8 opened about 1 month ago by zhnagchenchne
How much VRAM?
#7 opened 2 months ago by yiki12
🚀 Best Practices for Evaluating the Qwen3-Omni Model
#5 opened 2 months ago by Yunxz
GGUF quantized versions released (INT8, FP16)
#4 opened 2 months ago by vito95311
Quantized versions released (INT8 + FP16)
#3 opened 2 months ago by vito95311
Local Installation Video and Testing - Step by Step
#1 opened 2 months ago by fahdmirzac