Qwen3.5-0.8B


This repository contains model weights and configuration files for the post-trained model in the ONNX format.

These artifacts are compatible with Hugging Face Transformers.js, ONNX Runtime, etc.

Given its small parameter count, the intended use cases are prototyping, task-specific fine-tuning, and other research and development purposes.

Over recent months, we have intensified our focus on developing foundation models that deliver exceptional utility and performance. Qwen3.5 represents a significant leap forward, integrating breakthroughs in multimodal learning, architectural efficiency, reinforcement learning scale, and global accessibility to empower developers and enterprises with unprecedented capability and efficiency.

Qwen3.5 Highlights

Qwen3.5 features the following enhancements:

  • Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.

  • Efficient Hybrid Architecture: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.

  • Scalable RL Generalization: Reinforcement learning scaled across million-agent environments with progressively complex task distributions for robust real-world adaptability.

  • Global Linguistic Coverage: Expanded support to 201 languages and dialects, enabling inclusive, worldwide deployment with nuanced cultural and regional understanding.

  • Next-Generation Training Infrastructure: Near-100% multimodal training efficiency relative to text-only training, plus asynchronous RL frameworks that support massive-scale agent scaffolds and environment orchestration.

For more details, please refer to our blog post Qwen3.5.

Model Overview

  • Type: Causal Language Model with Vision Encoder
  • Training Stage: Pre-training & Post-training
  • Language Model
    • Number of Parameters: 0.8B
    • Hidden Dimension: 1024
    • Token Embedding: 248320 (Padded)
    • Number of Layers: 24
    • Hidden Layout: 6 × (3 × (Gated DeltaNet → FFN) → 1 × (Gated Attention → FFN)) (expanded in the sketch after this overview)
    • Gated DeltaNet:
      • Number of Linear Attention Heads: 16 for V and 16 for QK
      • Head Dimension: 128
    • Gated Attention:
      • Number of Attention Heads: 8 for Q and 2 for KV
      • Head Dimension: 256
      • Rotary Position Embedding Dimension: 64
    • Feed Forward Network:
      • Intermediate Dimension: 3584
    • LM Output: 248320 (Tied to token embedding)
    • MTP (Multi-Token Prediction): trained with multiple prediction steps
  • Context Length: 262,144 natively
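
For illustration, the hybrid layout above can be expanded into a flat per-layer schedule. The JavaScript snippet below is only a sketch of the repeating pattern described in this overview; the actual layer ordering is defined by the model configuration, not by this code.

// Sketch: expand "6 × (3 × (Gated DeltaNet → FFN) → 1 × (Gated Attention → FFN))"
// into a flat list of 24 layer types.
const NUM_BLOCKS = 6;          // outer repetitions
const DELTANET_PER_BLOCK = 3;  // linear-attention layers per block
const ATTENTION_PER_BLOCK = 1; // full (gated) attention layers per block

const layers = [];
for (let b = 0; b < NUM_BLOCKS; ++b) {
  for (let i = 0; i < DELTANET_PER_BLOCK; ++i) layers.push("Gated DeltaNet → FFN");
  for (let i = 0; i < ATTENTION_PER_BLOCK; ++i) layers.push("Gated Attention → FFN");
}

console.log(layers.length);       // 24 layers in total
console.log(layers.join("\n"));   // per-layer schedule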

Benchmark Results

Language

| Benchmark | Qwen3-4B-2507 | Qwen3-1.7B | Qwen3.5-2B | Qwen3.5-0.8B |
|---|---|---|---|---|
| Non-Thinking Mode | | | | |
| MMLU-Pro | 69.6 | 40.2 | 55.3 | 29.7 |
| MMLU-Redux | 84.2 | 64.4 | 69.2 | 48.5 |
| C-Eval | 80.2 | 61.0 | 65.2 | 46.4 |
| SuperGPQA | 42.8 | 21.0 | 30.4 | 16.9 |
| IFEval | 83.4 | 68.2 | 61.2 | 52.1 |
| MMMLU | 64.9 | 46.7 | 56.9 | 34.1 |
| Knowledge & STEM (Thinking) | | | | |
| MMLU-Pro | 74.0 | 56.5 | 66.5 | 42.3 |
| MMLU-Redux | 86.1 | 73.9 | 79.6 | 59.5 |
| C-Eval | 82.2 | 68.1 | 73.2 | 50.5 |
| SuperGPQA | 47.8 | 31.2 | 37.5 | 21.3 |
| GPQA | 65.8 | 40.1 | 51.6 | 11.9 |
| Instruction Following (Thinking) | | | | |
| IFEval | 87.4 | 72.5 | 78.6 | 44.0 |
| IFBench | 50.4 | 26.7 | 41.3 | 21.0 |
| MultiChallenge | 41.7 | 27.2 | 33.7 | 18.9 |
| Long Context (Thinking) | | | | |
| AA-LCR | 32.0 | 6.7 | 25.6 | 4.7 |
| LongBench v2 | 42.8 | 26.5 | 38.7 | 26.1 |
| Reasoning (Thinking) | | | | |
| HMMT Feb 25 | 57.5 | 10.2 | 22.9 | -- |
| HMMT Nov 25 | 69.6 | 8.9 | 19.6 | -- |
| General Agent (Thinking) | | | | |
| BFCL-V4 | 39.9 | -- | 43.6 | 25.3 |
| TAU2-Bench | 43.2 | -- | 48.8 | 11.6 |
| Multilingualism (Thinking) | | | | |
| MMMLU | 70.8 | 57.0 | 63.1 | 44.3 |
| MMLU-ProX | 62.4 | 49.4 | 52.3 | 34.6 |
| NOVA-63 | 47.1 | 40.3 | 46.4 | 42.4 |
| INCLUDE | 64.4 | 51.8 | 55.4 | 40.6 |
| Global PIQA | 73.5 | 63.1 | 69.3 | 59.4 |
| PolyMATH | 46.2 | 25.2 | 26.1 | 8.2 |
| WMT24++ | 58.9 | 39.3 | 45.8 | 27.2 |
| MAXIFE | 72.1 | 50.7 | 60.6 | 39.2 |

* TAU2-Bench: we follow the official setup except for the airline domain, where all models are evaluated by applying the fixes proposed in the Claude Opus 4.5 system card.
* MMLU-ProX: we report the averaged accuracy on 29 languages.
* WMT24++: a harder subset of WMT24 after difficulty labeling and rebalancing; we report the averaged scores on 55 languages using XCOMET-XXL.
* MAXIFE: we report the accuracy on English + multilingual original prompts (23 settings in total).
* Experimental settings: top_p=0.95, top_k=20, presence_penalty=1.5, and temperature=1.0 were used.
* Empty cells (--) indicate scores not yet available or not applicable.

Vision Language

| Benchmark | Qwen3-VL-4B | Qwen3-VL-2B | Qwen3.5-2B | Qwen3.5-0.8B |
|---|---|---|---|---|
| STEM and Puzzle | | | | |
| MMMU | 70.8 | 61.4 | 64.2/64.2 | 49.0/47.4 |
| MMMU-Pro | 57.0 | 42.5 | 50.3/47.7 | 31.2/31.4 |
| MathVista (mini) | 79.5 | 73.6 | 76.7/73.9 | 62.2/58.6 |
| DynaMath | 74.4 | 66.7 | 73.6/69.6 | 49.9/46.5 |
| ZEROBench | 0.0 | 0.0 | 1.0/0.0 | 0.0/0.0 |
| ZEROBench_sub | 18.9 | 13.2 | 17.1/18.6 | 12.9/11.4 |
| VlmsAreBlind | 68.6 | 50.0 | 75.8/74.3 | 59.4/57.3 |
| General VQA | | | | |
| RealWorldQA | 73.2 | 69.5 | 74.5/71.2 | 63.4/61.6 |
| MMStar | 73.2 | 68.1 | 71.7/68.0 | 58.3/55.9 |
| MMBenchEN-DEV-v1.1 | 86.7 | 81.9 | 83.3/81.3 | 69.9/68.0 |
| SimpleVQA | 48.8 | 43.6 | 38.5/39.5 | 31.3/30.4 |
| HallusionBench | 64.1 | 54.9 | 58.0/51.3 | 53.1/46.7 |
| Text Recognition and Document Understanding | | | | |
| MMLongBench-Doc | 44.4 | 33.8 | 45.4/38.8 | 33.6/28.1 |
| AI2D_TEST | 84.9 | 80.4 | 83.3/81.5 | 69.9/68.7 |
| CC-OCR | 73.8 | 68.3 | 72.9/75.8 | 63.2/66.7 |
| OmniDocBench1.5 | 80.0 | 65.9 | 79.8/80.9 | 61.0/70.6 |
| CharXiv (RQ) | 50.3 | 37.1 | 58.8/52.6 | 41.3/38.2 |
| OCRBench | 80.8 | 79.2 | 84.5/85.4 | 74.5/79.1 |
| Spatial Intelligence | | | | |
| RefCOCO (avg) | 88.2 | 84.8 | 84.8/84.3 | 79.3/77.8 |
| CountBench | 89.4 | 84.1 | 91.4/86.8 | 77.0/68.6 |
| ODInW13 | 39.4 | 36.0 | 35.9/40.5 | 31.6/33.2 |
| ERQA | 47.3 | 41.8 | 43.8/33.0 | 34.5/23.8 |
| EmbSpatialBench | 80.7 | 75.9 | 77.9/66.4 | 68.6/54.6 |
| RefSpatialBench | 45.3 | 28.9 | 32.9/30.0 | 23.5/21.7 |
| Hypersim | 11.9 | 11.2 | 12.4/12.4 | 11.9/11.0 |
| SUNRGBD | 28.0 | 28.6 | 28.7/25.6 | 26.1/23.3 |
| Nuscene | 4.9 | 4.0 | 6.9/8.5 | 5.7/7.0 |
| Video Understanding | | | | |
| VideoMME (w/ sub.) | 76.0 | 67.9 | 75.6/-- | 63.8/-- |
| VideoMME (w/o sub.) | 68.9 | 62.1 | 69.0/-- | 57.7/-- |
| VideoMMMU | 69.4 | 54.1 | 62.1/-- | 44.3/-- |
| MLVU | 75.7 | 69.2 | 76.2/-- | 65.6/-- |
| MVBench | 69.3 | 64.5 | 64.9/-- | 55.8/-- |
| LVBench | 53.5 | 47.6 | 57.1/-- | 45.1/-- |
| MMVU | 58.6 | 48.9 | 48.6/-- | 34.3/-- |
| Visual Agent | | | | |
| ScreenSpot Pro | 59.5 | 48.5 | --/54.5 | --/46.5 |
| Medical VQA | | | | |
| SLAKE | 65.9 | 61.1 | 74.4/67.5 | 62.6/59.5 |
| PMC-VQA | 48.4 | 42.4 | 48.8/54.0 | 40.4/45.5 |
| MedXpertQA-MM | 26.3 | 13.0 | 26.9/19.1 | 17.1/25.3 |

* Scores of Qwen3.5 models are reported as Thinking / Non-thinking.
* MathVision: our model’s score is evaluated using a fixed prompt, e.g., β€œPlease reason step by step, and put your final answer within \boxed{}.” For other models, we report the higher score between runs with and without the \boxed{} formatting.
* Experimental settings: For the Video benchmarks, we used top_p=0.95, top_k=20, presence_penalty=1.5, and temperature=1.0. All other benchmarks adopted the same sampling configuration but with temperature=0.6 under the thinking mode. Under the non-thinking mode, the sampling parameters were set to top_p=0.8, top_k=20, presence_penalty=1.5, and temperature=0.7.
* Empty cells (--) indicate scores not yet available or not applicable.

Quickstart

Qwen3.5 models support both non-thinking and thinking modes. Qwen3.5-0.8B operates in non-thinking mode by default. To enable thinking, refer to the examples here; a minimal Transformers.js sketch is also shown after the code example below.

For streamlined integration, we recommend serving Qwen3.5 via OpenAI-compatible APIs. For this ONNX release, the guide below shows how to run the model locally with Transformers.js.

Transformers.js

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

npm i @huggingface/transformers@next

You can then use the model like this:

import {
  AutoProcessor,
  Qwen3_5ForConditionalGeneration,
  RawImage,
  TextStreamer,
} from "@huggingface/transformers";

const model_id = "onnx-community/Qwen3.5-0.8B-ONNX";
const processor = await AutoProcessor.from_pretrained(model_id);
const model = await Qwen3_5ForConditionalGeneration.from_pretrained(model_id, {
  dtype: {
    embed_tokens: "q4",
    vision_encoder: "fp16",
    decoder_model_merged: "q4",
  },
  device: "webgpu",
});

// Prepare inputs
const url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg";
const image = await (await RawImage.read(url)).resize(448, 448);
const conversation = [
  {
    role: "user",
    content: [
      { type: "image" },
      { type: "text", text: "Describe this image." },
    ],
  },
];
const text = processor.apply_chat_template(conversation, {
  add_generation_prompt: true,
});
const inputs = await processor(text, image);

const outputs = await model.generate({
  ...inputs,
  max_new_tokens: 512,
  streamer: new TextStreamer(processor.tokenizer, {
    skip_prompt: true,
    skip_special_tokens: false,
  }),
});

// Decode output
const decoded = processor.batch_decode(
  outputs.slice(null, [inputs.input_ids.dims.at(-1), null]),
  {
    skip_special_tokens: true,
  },
);
console.log(decoded[0]);
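
For thinking mode, a minimal sketch is shown below. It assumes the bundled chat template accepts an enable_thinking flag, as in earlier Qwen3 releases; this is not confirmed for Qwen3.5, so refer to the official examples linked above for the authoritative switch.

// Sketch only: toggle thinking mode when building the prompt.
// Assumption: the chat template forwards an `enable_thinking` flag
// (as in earlier Qwen3 chat templates). `processor`, `conversation`,
// and `image` are those from the example above.
const text_thinking = processor.apply_chat_template(conversation, {
  add_generation_prompt: true,
  enable_thinking: true, // this checkpoint defaults to non-thinking
});
const inputs_thinking = await processor(text_thinking, image);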

Best Practices

To achieve optimal performance, we recommend the following settings:

  1. Sampling Parameters:

    • We suggest using the following sets of sampling parameters, depending on the mode and task type (a Transformers.js sketch follows this list):

      • Non-thinking mode for text tasks:
        temperature=1.0, top_p=1.00, top_k=20, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
      • Non-thinking mode for VL tasks:
        temperature=0.7, top_p=0.80, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
      • Thinking mode for text tasks:
        temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
      • Thinking mode for VL or precise coding (e.g., WebDev) tasks:
        temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
    • For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

  2. Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

  3. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.

    • Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
    • Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g., "answer": "C"."
  4. No Thinking Content in History: In multi-turn conversations, the historical model output should include only the final output and does not need to include the thinking content. This is already handled by the provided Jinja2 chat template. For frameworks that do not use the Jinja2 chat template directly, it is up to the developers to ensure this practice is followed (a sketch is shown after this list).

  5. Long Video Understanding: To optimize inference efficiency for plain text and images, the size parameter in the released video_preprocessor_config.json is conservatively configured. It is recommended to set the longest_edge parameter in the video_preprocessor_config file to 469,762,048 (corresponding to 224k video tokens) to enable higher frame-rate sampling for hour-scale videos and thereby achieve superior performance. For example,

    {"longest_edge": 469762048, "shortest_edge": 4096}
    

    Alternatively, override the default values via engine startup parameters. For implementation details, refer to: vLLM / SGLang.
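
The sketch below shows how the thinking-mode text settings from item 1 could be passed to Transformers.js generation, continuing the Quickstart example above. Option names follow the Hugging Face GenerationConfig; presence_penalty is an engine/API-level parameter (e.g., vLLM, SGLang, or OpenAI-compatible servers) rather than a standard generate() option, so it is omitted here, and min_p support may depend on the library version.

// Sketch: recommended thinking-mode text sampling with Transformers.js.
// Assumes `model` and `inputs` are set up as in the Quickstart example.
const generated = await model.generate({
  ...inputs,
  do_sample: true,
  temperature: 1.0,
  top_p: 0.95,
  top_k: 20,
  repetition_penalty: 1.0,
  // presence_penalty is typically configured at the serving layer
  // (vLLM / SGLang / OpenAI-compatible APIs), not in generate().
  max_new_tokens: 32768, // see "Adequate Output Length" above
});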
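
As a sketch of item 4 for frameworks that do not use the Jinja2 chat template, the helper below strips thinking content from earlier assistant turns before the conversation is re-sent, assuming the thinking content is wrapped in <think>…</think> tags as in Qwen3. The helper name and the sample conversation are illustrative, not part of any library.

// Sketch: keep only the final answer of previous assistant turns.
// Assumes thinking content is delimited by <think>...</think> tags.
function stripThinking(text) {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}

const history = [
  { role: "user", content: "What is 17 * 24?" },
  {
    role: "assistant",
    // Raw output from the previous turn, including the reasoning trace.
    content: "<think>17 * 24 = 340 + 68 = 408.</think>The answer is 408.",
  },
  { role: "user", content: "Now divide that by 3." },
];

// Sanitize assistant turns before building the next prompt.
const cleaned = history.map((m) =>
  m.role === "assistant" ? { ...m, content: stripThinking(m.content) } : m,
);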

Citation

If you find our work helpful, feel free to cite it.

@misc{qwen3.5,
    title  = {{Qwen3.5}: Towards Native Multimodal Agents},
    author = {{Qwen Team}},
    month  = {February},
    year   = {2026},
    url    = {https://qwen.ai/blog?id=qwen3.5}
}