---
library_name: transformers
license: other
license_name: cc-by-nc-4.0
pipeline_tag: text-generation
---

# Nemotron-Flash-3B Instruct Model

<p align="center">
  🗞️ <a href="https://arxiv.org/pdf/2511.18890">Paper</a>&nbsp;&nbsp;|&nbsp;&nbsp;🤗 <a href="https://huggingface.co/nvidia/Nemotron-Flash-1B">Nemotron-Flash-1B</a>&nbsp;&nbsp;|&nbsp;&nbsp;🤗 <a href="https://huggingface.co/nvidia/Nemotron-Flash-3B">Nemotron-Flash-3B</a>&nbsp;&nbsp;|&nbsp;&nbsp;🤗 <a href="https://huggingface.co/nvidia/Nemotron-Flash-3B-Instruct">Nemotron-Flash-3B-Instruct</a>
</p>

## Model Overview

Nemotron-Flash is a new hybrid small language model family designed around real-world latency rather than parameter count. It features latency-optimal depth–width ratios, hybrid operators discovered through evolutionary search, and training-time weight normalization. See our <a href="https://arxiv.org/pdf/2511.18890">NeurIPS 2025 paper</a> for more technical details.

The models achieve SOTA accuracy in math, coding, and commonsense reasoning at the 1B and 3B scales while delivering significantly better small-batch latency and large-batch throughput. For example, Nemotron-Flash-1B achieves +5.5% accuracy, 1.9× lower latency, and 45.6× higher throughput compared with Qwen3-0.6B, and Nemotron-Flash-3B achieves +2% / +5.5% accuracy over Qwen2.5-3B / Qwen3-1.7B with 1.3× / 1.7× lower latency and 6.4× / 18.7× higher throughput, respectively.


<div align="center">
<img src="https://huggingface.co/nvidia/Nemotron-Flash-3B/resolve/main/images/nemotron_flash_result.png" alt="Compare with SOTA SLMs" width="800">
</div>


## Environment
```bash
torch<=2.9.1
transformers<=4.56.2
causal-conv1d
flash-attn<=2.7.3
mamba-ssm
flash-linear-attention
```
We provide a <a href="https://huggingface.co/nvidia/Nemotron-Flash-3B/resolve/main/setup.sh">script</a> to build the conda environment: `bash setup.sh`.
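
If you set up the packages manually instead of running `setup.sh`, the short check below (not part of the official setup) prints the installed versions so you can compare them against the pins above.

```python
# Unofficial sanity check: report installed versions of the required packages
# so they can be compared against the version pins listed above.
from importlib.metadata import PackageNotFoundError, version

required = [
    "torch",
    "transformers",
    "causal-conv1d",
    "flash-attn",
    "mamba-ssm",
    "flash-linear-attention",
]

for pkg in required:
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```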



## Chat with Nemotron-Flash

We integrated the attention kernel from <a href="https://nvidia.github.io/TensorRT-LLM/torch/auto_deploy/auto-deploy.html">TRT-LLM AutoDeploy</a> to enable generation with CUDA Graph:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo_name = "nvidia/Nemotron-Flash-3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(repo_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_name, trust_remote_code=True)
model = model.cuda().to(torch.bfloat16)

# `max_new_tokens` must be defined before it is used to size the CUDA graph state.
max_new_tokens = 256

print('Initializing generation state...')
generation_state = model.init_cuda_graph_generation(
    max_new_tokens=max_new_tokens,
    batch_size=1,
    device='cuda',
)

prompt = input("User:")
prompt = "User: " + prompt + "\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to('cuda')

print(f"Generating with CUDA graph acceleration...")
outputs = model.generate_with_cuda_graph(
    input_ids=inputs["input_ids"],
    generation_state=generation_state,
    max_new_tokens=max_new_tokens,
    temperature=0,
    eos_token_id=tokenizer.eos_token_id,
)

response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(f"Response: {response}")
```

Alternatively, you can generate without CUDA Graph:

```python
outputs = model.generate_with_cache(
    input_ids=inputs["input_ids"],
    max_new_tokens=256,
    temperature=0,
    eos_token_id=tokenizer.eos_token_id,
)
```
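
As a rough check of what the CUDA Graph path buys you, the sketch below times both generation paths on the same prompt. It reuses `model`, `tokenizer`, `inputs`, and `generation_state` from the examples above and assumes the generation state can be reused across calls; the numbers are illustrative, not an official benchmark.

```python
import time

import torch


def timed(fn):
    # Synchronize before and after so we measure actual GPU work.
    torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn()
    torch.cuda.synchronize()
    return out, time.perf_counter() - start


_, t_graph = timed(lambda: model.generate_with_cuda_graph(
    input_ids=inputs["input_ids"],
    generation_state=generation_state,
    max_new_tokens=256,
    temperature=0,
    eos_token_id=tokenizer.eos_token_id,
))

_, t_cache = timed(lambda: model.generate_with_cache(
    input_ids=inputs["input_ids"],
    max_new_tokens=256,
    temperature=0,
    eos_token_id=tokenizer.eos_token_id,
))

print(f"with CUDA graph: {t_graph:.2f}s | without: {t_cache:.2f}s")
```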

## Finetune Nemotron-Flash

To finetune Nemotron-Flash models, switch the attention kernel to FlashAttention2 when loading the model:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

repo_name = "nvidia/Nemotron-Flash-3B-Instruct"

config = AutoConfig.from_pretrained(repo_name, trust_remote_code=True)
setattr(config, "attention_implementation_new", "flash_attention_2")
model = AutoModelForCausalLM.from_pretrained(repo_name, config=config, torch_dtype=torch.bfloat16, trust_remote_code=True)
```
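
As a starting point, here is a minimal single-step fine-tuning sketch built on the loading code above. It assumes the remote-code model follows the standard Hugging Face causal-LM convention of returning `outputs.loss` when `labels` are passed; for real training you would plug the model into your usual framework (e.g. `Trainer`) instead.

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo_name = "nvidia/Nemotron-Flash-3B-Instruct"

# Load with the FlashAttention2 kernel as described above.
config = AutoConfig.from_pretrained(repo_name, trust_remote_code=True)
setattr(config, "attention_implementation_new", "flash_attention_2")
tokenizer = AutoTokenizer.from_pretrained(repo_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_name, config=config, torch_dtype=torch.bfloat16, trust_remote_code=True
).cuda()
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Toy supervised example: next-token prediction on one formatted sample.
# (Assumes the model computes the causal-LM loss when `labels` are given.)
text = "User: What is 2 + 2?\nAssistant: 4"
batch = tokenizer(text, return_tensors="pt").to("cuda")
labels = batch["input_ids"].clone()

outputs = model(input_ids=batch["input_ids"], labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"loss: {outputs.loss.item():.4f}")
```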

## Running Nemotron-Flash with TensorRT-LLM

### Setup
Follow the TensorRT-LLM <a href="https://nvidia.github.io/TensorRT-LLM/quick-start-guide.html">installation and quick-start guide</a>.

### Quick example

An example script for running through the generation workflow:
```bash
cd examples/auto_deploy
python build_and_run_ad.py --model nvidia/Nemotron-Flash-3B-Instruct --args.yaml-extra nemotron_flash.yaml
```

### Serving with trtllm-serve

- Spin up a trtllm server (more details are in this <a href="https://nvidia.github.io/TensorRT-LLM/commands/trtllm-serve/trtllm-serve.html#starting-a-server">doc</a>):
```bash
trtllm-serve serve nvidia/Nemotron-Flash-3B-Instruct \
    --backend _autodeploy \
    --trust_remote_code \
    --extra_llm_api_options examples/auto_deploy/nemotron_flash.yaml
```

- Send a request (more details are in this <a href="https://nvidia.github.io/TensorRT-LLM/examples/curl_chat_client.html">doc</a>):
```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "nvidia/Nemotron-Flash-3B-Instruct",
        "messages":[{"role": "user", "content": "Where is New York?"}],
        "max_tokens": 16,
        "temperature": 0
    }'
```
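
Since `trtllm-serve` exposes an OpenAI-compatible endpoint, the same request can also be sent from Python. A minimal sketch with `requests`, assuming the server above is running on `localhost:8000`:

```python
import requests

# Same chat-completion request as the curl example above.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "nvidia/Nemotron-Flash-3B-Instruct",
        "messages": [{"role": "user", "content": "Where is New York?"}],
        "max_tokens": 16,
        "temperature": 0,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```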

## Citation
```bibtex
@misc{fu2025nemotronflash,
      title={Nemotron-Flash: Towards Latency-Optimal Hybrid Small Language Models}, 
      author={Yonggan Fu and Xin Dong and Shizhe Diao and Matthijs Van keirsbilck and Hanrong Ye and Wonmin Byeon and Yashaswi Karnati and Lucas Liebenwein and Hannah Zhang and Nikolaus Binder and Maksim Khadkevich and Alexander Keller and Jan Kautz and Yingyan Celine Lin and Pavlo Molchanov},
      year={2025},
      eprint={2511.18890},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2511.18890}, 
}
```